Tom Lane [Fri, 3 Jun 2016 22:07:14 +0000 (18:07 -0400)]
Mark read/write expanded values as read-only in ValuesNext(), too.
Further thought about bug #14174 motivated me to try the case of a
R/W datum being returned from a VALUES list, and sure enough it was
broken. Fix that.
Also add a regression test case exercising the same scenario for
FunctionScan. That's not broken right now, because the function's
result will get shoved into a tuplestore between generation and use;
but it could easily become broken whenever we get around to optimizing
FunctionScan better.
There don't seem to be any other places where we put the result of
expression evaluation into a virtual tuple slot that could then be
the source for Vars of further expression evaluation, so I think
this is the end of this bug.
Tom Lane [Fri, 3 Jun 2016 19:14:35 +0000 (15:14 -0400)]
Mark read/write expanded values as read-only in ExecProject().
If a plan node output expression returns an "expanded" datum, and that
output column is referenced in more than one place in upper-level plan
nodes, we need to ensure that what is returned is a read-only reference
not a read/write reference. Otherwise one of the referencing sites could
scribble on or even delete the expanded datum before we have evaluated the
others. Commit 1dc5ebc9077ab742, which introduced this feature, supposed
that it'd be sufficient to make SubqueryScan nodes force their output
columns to read-only state. The folly of that was revealed by bug #14174
from Andrew Gierth, and really should have been immediately obvious
considering that the planner will happily optimize SubqueryScan nodes
out of the plan without any regard for this issue.
The safest fix seems to be to make ExecProject() force its results into
read-only state; that will cover every case where a plan node returns
expression results. Actually we can delegate this to ExecTargetList()
since we can recursively assume that plain Vars will not reference
read-write datums. That should keep the extra overhead down to something
minimal. We no longer need ExecMakeSlotContentsReadOnly(), which was
introduced only in support of the idea that just a few plan node types
would need to do this.
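As a rough sketch of the idea (illustrative names only, not the committed
ExecTargetList() code), each computed output column is passed through
MakeExpandedObjectReadOnly() before upper plan nodes can see it:
#include "postgres.h"
#include "utils/expandeddatum.h"
/* Sketch only: downgrade any read-write expanded-object pointer among the
 * computed output columns to a read-only pointer. */
static void
force_outputs_read_only(Datum *values, bool *isnull,
                        const int16 *attlen, int natts)
{
    int         i;
    for (i = 0; i < natts; i++)
        values[i] = MakeExpandedObjectReadOnly(values[i], isnull[i],
                                               attlen[i]);
}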
In the future it would be nice to have the planner account for this problem
and inject force-to-read-only expression evaluation nodes into only the
places where there's a risk of multiple evaluation. That's not a suitable
solution for 9.5 or even 9.6 at this point, though.
Report: <20160603124628.9932.41279@wrigleys.postgresql.org>
Tom Lane [Fri, 3 Jun 2016 15:29:20 +0000 (11:29 -0400)]
Suppress -Wunused-result warnings about write(), again.
Adopt the same solution as in commit aa90e148ca70a235, but this time
let's put the ugliness inside the write_stderr() macro, instead of
expecting each call site to deal with it. Back-port that decision
into psql/common.c where I got the macro from in the first place.
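For reference, a hedged sketch of what such a macro can look like (an
assumed reconstruction, not the committed text):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
/* Assigning write()'s result to a variable and then casting it to void
 * quiets -Wunused-result, which a bare (void) cast on the call does not. */
#define write_stderr(str) \
    do { \
        const char *str_ = (str); \
        ssize_t     rc_; \
        rc_ = write(fileno(stderr), str_, strlen(str_)); \
        (void) rc_; \
    } while (0)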
Per gripe from Peter Eisentraut.
Tom Lane [Thu, 2 Jun 2016 17:27:53 +0000 (13:27 -0400)]
Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar
signals and set a flag that was tested in various data-transfer loops.
This was prone to errors of omission (cf commit 3c8aa6654); and even if
the client-side response was prompt, we did nothing that would cause
long-running SQL commands (e.g. CREATE INDEX) to terminate early.
Also, the master process would effectively do nothing at all upon receipt
of SIGINT; the only reason it seemed to work was that in typical scenarios
the signal would also be delivered to the child processes. We should
support termination when a signal is delivered only to the master process,
though.
Windows builds had no console interrupt handler, so they would just fall
over immediately at control-C, again leaving long-running SQL commands to
finish unmolested.
To fix, remove the flag-checking approach altogether. Instead, allow the
Unix signal handler to send a cancel request directly and then exit(1).
In the master process, also have it forward the signal to the children.
On Windows, add a console interrupt handler that behaves approximately
the same. The main difference is that a single execution of the Windows
handler can send all the cancel requests since all the info is available
in one process, whereas on Unix each process sends a cancel only for its
own database connection.
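On the Unix side, the essence is a handler along these lines (a sketch with
invented names, not the pg_dump code; PQcancel() is documented as safe to
call from a signal handler):
#include <signal.h>
#include <unistd.h>
#include <libpq-fe.h>
static PGcancel *volatile my_cancel = NULL;
/* Cancel the in-progress query on our own connection, then exit. */
static void
terminate_handler(int signum)
{
    char        errbuf[256];
    (void) signum;
    if (my_cancel != NULL)
        (void) PQcancel(my_cancel, errbuf, sizeof(errbuf));
    _exit(1);
}
static void
install_terminate_handler(PGconn *conn)
{
    my_cancel = PQgetCancel(conn);      /* release with PQfreeCancel() */
    signal(SIGINT, terminate_handler);
    signal(SIGTERM, terminate_handler);
}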
In passing, fix an old problem that DisconnectDatabase tends to send a
cancel request before exiting a parallel worker, even if nothing went
wrong. This is at least a waste of cycles, and could lead to unexpected
log messages, or maybe even data loss if it happened in pg_restore (though
in the current code the problem seems to affect only pg_dump). The cause
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE. That's
normally harmless because the next PQexec() will silently clear the
PGASYNC_BUSY state; but in a parallel worker we might exit without any
additional SQL commands after a COPY step. So add an extra PQgetResult()
call after a COPY to allow libpq to return to PGASYNC_IDLE state.
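In libpq terms the extra step looks roughly like this (a sketch, not the
pg_dump code):
#include <libpq-fe.h>
/* After the data of a COPY ... TO STDOUT has been consumed, collect the
 * command's final result(s) so libpq returns to the idle state and
 * PQtransactionStatus() stops reporting PQTRANS_ACTIVE. */
static void
finish_copy_out(PGconn *conn)
{
    char       *buf;
    PGresult   *res;
    while (PQgetCopyData(conn, &buf, 0) > 0)
        PQfreemem(buf);                 /* discard any remaining rows */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);                   /* consume the final result(s) */
}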
This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore
were introduced.
Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.
Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
Kevin Grittner [Thu, 2 Jun 2016 17:23:19 +0000 (12:23 -0500)]
Fix btree mark/restore bug.
Commit 2ed5b87f96d473962ec5230fd820abfeaccb2069 introduced a bug in
mark/restore, in an attempt to optimize repeated restores to the
same page. This caused an assertion failure during a merge join
which fed directly from an index scan, although the impact would
not be limited to that case. Revert the bad chunk of code from
that commit.
While investigating this bug it was discovered that a particular
"paranoia" set of the mark position field would not prevent bad
behavior; it would just make it harder to diagnose. Change that
into an assertion, which will draw attention to any future problem
in that area more directly.
Backpatch to 9.5, where the bug was introduced.
Bug #14169 reported by Shinta Koyanagi.
Preliminary analysis by Tom Lane identified which commit caused
the bug.
Tom Lane [Wed, 1 Jun 2016 20:14:21 +0000 (16:14 -0400)]
Clean up some minor inefficiencies in parallel dump/restore.
Parallel dump did a totally pointless query to find out the name of each
table to be dumped, which it already knows. Parallel restore runs issued
lots of redundant SET commands because _doSetFixedOutputState() was invoked
once per TOC item rather than just once at connection start. While the
extra queries are insignificant if you're dumping or restoring large
tables, it still seems worth getting rid of them.
Also, give the responsibility for selecting the right client_encoding for
a parallel dump worker to setup_connection() where it naturally belongs,
instead of having ad-hoc code for that in CloneArchive(). And fix some
minor bugs like use of strdup() where pg_strdup() would be safer.
Back-patch to 9.3, mostly to keep the branches in sync in an area that
we're still finding bugs in.
Discussion: <5086.1464793073@sss.pgh.pa.us>
Tom Lane [Tue, 31 May 2016 19:54:46 +0000 (15:54 -0400)]
Avoid useless closely-spaced writes of statistics files.
The original intent in the stats collector was that we should not write out
stats data oftener than every PGSTAT_STAT_INTERVAL msec. Backends will not
make requests at all if they see the existing data is newer than that, and
the stats collector is supposed to disregard requests having a cutoff_time
older than its most recently written data, so that close-together requests
don't result in multiple writes. But the latter part of that got broken
in commit 187492b6c2e8cafc, so that if two backends concurrently decide
the existing stats are too old, the collector would write the data twice.
(In principle the collector's logic would still merge requests as long as
the second one arrives before we've actually written data ... but since
the message collection loop would write data immediately after processing
a single inquiry message, that never happened in practice, and in any case
the window in which it might work would be much shorter than
PGSTAT_STAT_INTERVAL.)
To fix, improve pgstat_recv_inquiry so that it checks whether the cutoff
time is too old, and doesn't add a request to the queue if so. This means
that we do not need DBWriteRequest.request_time, because the decision is
taken before making a queue entry. And that means that we don't really
need the DBWriteRequest data structure at all; an OID list of database
OIDs will serve and allow removal of some rather verbose and crufty code.
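Schematically, the new collector-side test amounts to the following (a
sketch with invented names, not the pgstat.c code):
#include <stdbool.h>
#include <time.h>
/* A backend's inquiry is worth acting on only if it asks for data newer
 * than what the collector has already written. */
static bool
inquiry_needs_write(time_t cutoff_time, time_t last_statwrite)
{
    return cutoff_time > last_statwrite;
}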
In passing, improve the comments in this area, which have been rather
neglected. Also change backend_read_statsfile so that it's not silently
relying on MyDatabaseId to have some particular value in the autovacuum
launcher process. It accidentally worked as desired because MyDatabaseId
is zero in that process; but that does not seem like a dependency we want,
especially with no documentation about it.
Although this patch is mine, it turns out I'd rediscovered a known bug,
for which Tomas Vondra had already submitted a patch that's functionally
equivalent to the non-cosmetic aspects of this patch. Thanks to Tomas
for reviewing this version.
Back-patch to 9.3 where the bug was introduced.
Prior-Discussion: <1718942738eb65c8407fcd864883f4c8@fuzzy.cz>
Patch: <4625.1464202586@sss.pgh.pa.us>
Tom Lane [Tue, 31 May 2016 16:05:22 +0000 (12:05 -0400)]
Fix typo in CREATE DATABASE syntax synopsis.
Misplaced "]", evidently a thinko in commit
213335c14.
Alvaro Herrera [Mon, 30 May 2016 18:47:22 +0000 (14:47 -0400)]
Fix PageAddItem BRIN bug
BRIN was relying on the ability to remove a tuple from an index page,
then putting another tuple in the same line pointer. But PageAddItem
refuses to add a tuple beyond the first free item past the last used
item, and in particular, it rejects an attempt to add an item to an
empty page anywhere other than the first line pointer. PageAddItem
issues a WARNING and indicates to the caller that it failed, which in
turn causes the BRIN calling code to issue a PANIC, so the whole
sequence looks like this:
WARNING: specified item offset is too large
PANIC: failed to add BRIN tuple
To fix, create a new function PageAddItemExtended which is like
PageAddItem except that the two boolean arguments become a flags bitmap;
the "overwrite" and "is_heap" boolean flags in PageAddItem become
PAI_OVERWRITE and PAI_IS_HEAP flags in the new function, and a new flag
PAI_ALLOW_FAR_OFFSET enables the behavior required by BRIN.
PageAddItem() retains its original signature, for compatibility with
third-party modules (other callers in core code are not modified,
either).
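Roughly, a caller ends up doing something like the following sketch (the
exact signature of PageAddItemExtended() shown here is an assumption, not
copied from the tree):
#include "postgres.h"
#include "storage/bufpage.h"
/* Sketch of the boolean-to-flags mapping; flag names are from this commit. */
static OffsetNumber
add_item_with_flags(Page page, Item item, Size size, OffsetNumber offnum,
                    bool overwrite, bool is_heap, bool allow_far_offset)
{
    int         flags = 0;
    if (overwrite)
        flags |= PAI_OVERWRITE;
    if (is_heap)
        flags |= PAI_IS_HEAP;
    if (allow_far_offset)
        flags |= PAI_ALLOW_FAR_OFFSET;  /* what BRIN's revmap code needs */
    return PageAddItemExtended(page, item, size, offnum, flags);
}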
Also, in the belt-and-suspenders spirit, I added a new sanity check in
brinGetTupleForHeapBlock to raise an error if a TID found in the revmap
is not marked as live by the page header. This causes it to react with
"ERROR: corrupted BRIN index" to the bug at hand, rather than a hard
crash.
Backpatch to 9.5.
Bug reported by Andreas Seltenreich as detected by his handy sqlsmith
fuzzer.
Discussion: https://www.postgresql.org/message-id/87mvni77jh.fsf@elite.ansel.ydns.eu
Tom Lane [Sun, 29 May 2016 17:18:48 +0000 (13:18 -0400)]
Fix missing abort checks in pg_backup_directory.c.
Parallel restore from directory format failed to respond to control-C
in a timely manner, because there were no checkAborting() calls in the
code path that reads data from a file and sends it to the backend.
If any worker was in the midst of restoring data for a large table,
you'd just have to wait.
This fix doesn't do anything for the problem of aborting a long-running
server-side command, but at least it fixes things for data transfers.
Back-patch to 9.3 where parallel restore was introduced.
Tom Lane [Sun, 29 May 2016 17:00:09 +0000 (13:00 -0400)]
Remove pg_dump/parallel.c's useless "aborting" flag.
This was effectively dead code, since the places that tested it could not
be reached after we entered the on-exit-cleanup routine that would set it.
It seems to have been a leftover from a design in which error abort would
try to send fresh commands to the workers --- a design which could never
have worked reliably, of course. Since the flag is not cross-platform, it
complicates reasoning about the code's behavior, which we could do without.
Although this is effectively just cosmetic, back-patch anyway, because
there are some actual bugs in the vicinity of this behavior.
Discussion: <15583.1464462418@sss.pgh.pa.us>
Tom Lane [Sat, 28 May 2016 18:02:11 +0000 (14:02 -0400)]
Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.
The commentary in this file was in extremely sad shape. The author(s)
had clearly never heard of the project convention that a function header
comment should provide an API spec of some sort for that function. Much
of it was flat out wrong, too --- maybe it was accurate when written, but
if so it had not been updated to track subsequent code revisions. Rewrite
and rearrange to try to bring it up to speed, and annotate some of the
places where more work is needed. (I've refrained from actually fixing
anything of substance ... yet.)
Also, rename a couple of functions for more clarity as to what they do,
do some very minor code rearrangement, remove some pointless Asserts,
fix an incorrect Assert in readMessageFromPipe, and add a missing socket
close in one error exit from pgpipe(). The last would be a bug if we
tried to continue after pgpipe() failure, but since we don't, it's just
cosmetic at present.
Although this is only cosmetic, back-patch to 9.3 where parallel.c was
added. It's sufficiently invasive that it'll pose a hazard for future
back-patching if we don't.
Discussion: <25239.1464386067@sss.pgh.pa.us>
Tom Lane [Fri, 27 May 2016 16:02:09 +0000 (12:02 -0400)]
Clean up thread management in parallel pg_dump for Windows.
Since we start the worker threads with _beginthreadex(), we should use
_endthreadex() to terminate them. We got this right in the normal-exit
code path, but not so much during an error exit from a worker.
In addition, be sure to apply CloseHandle to the thread handle after
each thread exits.
It's not clear that these oversights cause any user-visible problems,
since the pg_dump run is about to terminate anyway. Still, it's clearly
better to follow Microsoft's API specifications than ignore them.
Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including
being a bit less random about where to cast between uintptr_t and HANDLE,
and being sure to clear the worker identity field for each dead worker
(not that false matches should be possible later, but let's be careful).
Original observation and patch by Armin Schöffmann, cosmetic improvements
by Michael Paquier and me. (Armin's patch also included closing sockets
in ShutdownWorkersHard(), but that's been dealt with already in commit
df8d2d8c4.) Back-patch to 9.3 where parallel pg_dump was introduced.
Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
Tom Lane [Fri, 27 May 2016 14:40:20 +0000 (10:40 -0400)]
Be more predictable about reporting "lock timeout" vs "statement timeout".
If both timeout indicators are set when we arrive at ProcessInterrupts,
we've historically just reported "lock timeout". However, some buildfarm
members have been observed to fail isolationtester's timeouts test by
reporting "lock timeout" when the statement timeout was expected to fire
first. The cause seems to be that the process is allowed to sleep longer
than expected (probably due to heavy machine load) so that the lock
timeout happens before we reach the point of reporting the error, and
then this arbitrary tiebreak rule does the wrong thing. We can improve
matters by comparing the scheduled timeout times to decide which error
to report.
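The tiebreak reduces to comparing the two scheduled firing times; roughly
(a sketch, not the ProcessInterrupts code -- names are invented, the
messages are the existing ones):
/* Illustrative only. */
typedef long TimestampApprox;
static const char *
choose_timeout_message(TimestampApprox lock_fire_time,
                       TimestampApprox stmt_fire_time)
{
    if (lock_fire_time < stmt_fire_time)
        return "canceling statement due to lock timeout";
    return "canceling statement due to statement timeout";
}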
I had originally proposed greatly reducing the 1-second window between
the two timeouts in the test cases. On reflection that is a bad idea,
at least for the case where the lock timeout is expected to fire first,
because that would assume that it takes negligible time to get from
statement start to the beginning of the lock wait. Thus, this patch
doesn't completely remove the risk of test failures on slow machines.
Empirically, however, the case this handles is the one we are seeing
in the buildfarm. The explanation may be that the other case requires
the scheduler to take the CPU away from a busy process, whereas the
case fixed here only requires the scheduler to not give the CPU back
right away to a process that has been woken from a multi-second sleep
(and, perhaps, has been swapped out meanwhile).
Back-patch to 9.3 where the isolationtester timeouts test was added.
Discussion: <8693.1464314819@sss.pgh.pa.us>
Magnus Hagander [Thu, 26 May 2016 20:14:23 +0000 (22:14 +0200)]
Make pg_dump error cleanly with -j against hot standby
Getting a synchronized snapshot is not supported on a hot standby node,
and one is requested by default when -j is used with multiple sessions.
Previously this failed anyway, but with a server error that would also
end up in the log. Instead, properly detect this case and give a better
error message.
Alvaro Herrera [Thu, 26 May 2016 15:58:22 +0000 (11:58 -0400)]
Fix typo in 9.5 release notes
Noted by 星合 拓馬 (HOSHIAI Takuma)
Tom Lane [Thu, 26 May 2016 15:51:04 +0000 (11:51 -0400)]
Make pg_dump behave more sanely when built without HAVE_LIBZ.
For some reason the code to emit a warning and switch to uncompressed
output was placed down in the guts of pg_backup_archiver.c. This is
definitely too late in the case of parallel operation (and I rather
wonder if it wasn't too late for other purposes as well). Put it in
pg_dump.c's option-processing logic, which seems a much saner place.
Also, the default behavior with custom or directory output format was
to emit the warning telling you the output would be uncompressed. This
seems unhelpful, so silence that case.
Back-patch to 9.3 where parallel dump was introduced.
Kyotaro Horiguchi, adjusted a bit by me
Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
Tom Lane [Thu, 26 May 2016 14:50:30 +0000 (10:50 -0400)]
In Windows pg_dump, ensure idle workers will shut down during error exit.
The Windows coding of ShutdownWorkersHard() thought that setting termEvent
was sufficient to make workers exit after an error. But that only helps
if a worker is busy and passes through checkAborting(). An idle worker
will just sit, resulting in pg_dump failing to exit until the user gives up
and hits control-C. We should close the write end of the command pipe
so that idle workers will see socket EOF and exit, as the Unix coding was
already doing.
Back-patch to 9.3 where parallel pg_dump was introduced.
Kyotaro Horiguchi
Tom Lane [Wed, 25 May 2016 21:48:15 +0000 (17:48 -0400)]
Ensure that backends see up-to-date statistics for shared catalogs.
Ever since we split the statistics collector's reports into per-database
files (commit 187492b6c2e8cafc), backends have been seeing stale statistics
for shared catalogs. This is because the inquiry message only prompts the
collector to write the per-database file for the requesting backend's own
database. Stats for shared catalogs are in a separate file for "DB 0",
which didn't get updated.
In normal operation this was partially masked by the fact that the
autovacuum launcher would send an inquiry message at least once per
autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would
never be more than a minute out of date. However the problem becomes very
obvious with autovacuum disabled, as reported by Peter Eisentraut.
To fix, redefine the semantics of inquiry messages so that both the
specified DB and DB 0 will be dumped. (This might seem a bit inefficient,
but we have no good way to know whether a backend's transaction will look
at shared-catalog stats, so we have to read both groups of stats whenever
we request stats. Sending two inquiry messages would definitely not be
better.)
Back-patch to 9.3 where the bug was introduced.
Report: <56AD41AC.1030509@gmx.net>
Tom Lane [Wed, 25 May 2016 16:39:57 +0000 (12:39 -0400)]
Fix broken error handling in parallel pg_dump/pg_restore.
In the original design for parallel dump, worker processes reported errors
by sending them up to the master process, which would print the messages.
This is unworkably fragile for a couple of reasons: it risks deadlock if a
worker sends an error at an unexpected time, and if the master has already
died for some reason, the user will never get to see the error at all.
Revert that idea and go back to just always printing messages to stderr.
This approach means that if all the workers fail for similar reasons (eg,
bad password or server shutdown), the user will see N copies of that
message, not only one as before. While that's slightly annoying, it's
certainly better than not seeing any message; not to mention that we
shouldn't assume that only the first failure is interesting.
An additional problem in the same area was that the master failed to
disable SIGPIPE (at least until much too late), which meant that sending a
command to an already-dead worker would cause the master to crash silently.
That was bad enough in itself but was made worse by the total reliance on
the master to print errors: even if the worker had reported an error, you
would probably not see it, depending on timing. Instead disable SIGPIPE
right after we've forked the workers, before attempting to send them
anything.
Additionally, the master relies on seeing socket EOF to realize that a
worker has exited prematurely --- but on Windows, there would be no EOF
since the socket is attached to the process that includes both the master
and worker threads, so it remains open. Make archive_close_connection()
close the worker end of the sockets so that this acts more like the Unix
case. It's not perfect, because if a worker thread exits without going
through exit_nicely() the closures won't happen; but that's not really
supposed to happen.
This has been wrong all along, so back-patch to 9.3 where parallel dump
was introduced.
Report: <2458.1450894615@sss.pgh.pa.us>
Tom Lane [Tue, 24 May 2016 19:47:51 +0000 (15:47 -0400)]
Fetch XIDs atomically during vac_truncate_clog().
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results. Fetch each one just once to ensure sane behavior
in the minimum calculation. Noted while reviewing Alexander Korotkov's
patch in the same area.
Discussion: <8564.1464116473@sss.pgh.pa.us>
Tom Lane [Tue, 24 May 2016 19:20:12 +0000 (15:20 -0400)]
Avoid consuming an XID during vac_truncate_clog().
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future". That was probably fine when written, but in a lazy
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()
causes an XID to be assigned when otherwise one would not be. Most of the
time that's not a big problem ... but if we are hard up against the
wraparound limit, consuming XIDs during antiwraparound vacuums is a very
bad thing.
Instead, use ReadNewTransactionId(), which not only avoids this problem
but is in itself a better comparison point to test whether wraparound
has already occurred.
Report and patch by Alexander Korotkov. Back-patch to all versions.
Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>
Tom Lane [Mon, 23 May 2016 23:23:36 +0000 (19:23 -0400)]
Support IndexElem in raw_expression_tree_walker().
Needed for cases in which INSERT ... ON CONFLICT appears inside a
recursive CTE item. Per bug #14153 from Thomas Alton.
Patch by Peter Geoghegan, slightly adjusted by me
Report: <20160521232802.22598.13537@wrigleys.postgresql.org>
Tom Lane [Mon, 23 May 2016 18:16:40 +0000 (14:16 -0400)]
Fix latent crash in do_text_output_multiline().
do_text_output_multiline() would fail (typically with a null pointer
dereference crash) if its input string did not end with a newline. Such
cases do not arise in our current sources; but it certainly could happen
in future, or in extension code's usage of the function, so we should fix
it. To fix, replace "eol += len" with "eol = text + len".
While at it, make two cosmetic improvements: mark the input string const,
and rename the argument from "text" to "txt" to dodge pgindent strangeness
(since "text" is a typedef name).
Even though this problem is only latent at present, it seems like a good
idea to back-patch the fix, since it's a very simple/safe patch and it's
not out of the realm of possibility that we might in future back-patch
something that expects sane behavior from do_text_output_multiline().
Per report from Hao Lee.
Report: <CAGoxFiFPAGyPAJLcFxTB5cGhTW2yOVBDYeqDugYwV4dEd1L_Ag@mail.gmail.com>
Tom Lane [Fri, 20 May 2016 19:51:57 +0000 (15:51 -0400)]
Further improve documentation about --quote-all-identifiers switch.
Mention it in the Notes section too, per suggestion from David Johnston.
Discussion: <20160520165824.22598.31426@wrigleys.postgresql.org>
Tom Lane [Fri, 20 May 2016 18:59:48 +0000 (14:59 -0400)]
Improve documentation about pg_dump's --quote-all-identifiers switch.
Per bug #14152 from Alejandro Martínez. Back-patch to all supported
branches.
Discussion: <20160520165824.22598.31426@wrigleys.postgresql.org>
Peter Eisentraut [Sat, 14 May 2016 01:24:13 +0000 (21:24 -0400)]
doc: Fix typo
From: Alexander Law <exclusion@gmail.com>
Tom Lane [Fri, 13 May 2016 00:04:12 +0000 (20:04 -0400)]
Ensure plan stability in contrib/btree_gist regression test.
Buildfarm member skink failed with symptoms suggesting that an
auto-analyze had happened and changed the plan displayed for a
test query. Although this is evidently of low probability,
regression tests that sometimes fail are no fun, so add commands
to force a bitmap scan to be chosen.
Alvaro Herrera [Thu, 12 May 2016 19:02:49 +0000 (16:02 -0300)]
Fix bogus comments
Some comments mentioned XLogReplayBuffer, but there's no such function:
that was an interim name for a function that got renamed to
XLogReadBufferForRedo, before commit 2c03216d831160 was pushed.
Alvaro Herrera [Thu, 12 May 2016 18:36:51 +0000 (15:36 -0300)]
Fix obsolete comment
Tom Lane [Wed, 11 May 2016 21:06:53 +0000 (17:06 -0400)]
Fix infer_arbiter_indexes() to not barf on system columns.
While it could be argued that rejecting system column mentions in the
ON CONFLICT list is an unsupported feature, falling over altogether
just because the table has a unique index on OID is indubitably a bug.
As far as I can tell, fixing infer_arbiter_indexes() is sufficient to
make ON CONFLICT (oid) actually work, though making a regression test
for that case is problematic because of the impossibility of setting
the OID counter to a known value.
Minor cosmetic cleanups along with the bug fix.
Tom Lane [Wed, 11 May 2016 20:20:03 +0000 (16:20 -0400)]
Fix assorted missing infrastructure for ON CONFLICT.
subquery_planner() failed to apply expression preprocessing to the
arbiterElems and arbiterWhere fields of an OnConflictExpr. No doubt the
theory was that this wasn't necessary because we don't actually try to
execute those expressions; but that's wrong, because it results in failure
to match to index expressions or index predicates that are changed at all
by preprocessing. Per bug #14132 from Reynold Smith.
Also add pullup_replace_vars processing for onConflictWhere. Perhaps
it's impossible to have a subquery reference there, but I'm not exactly
convinced; and even if true today it's a failure waiting to happen.
Also add some comments to other places where one or another field of
OnConflictExpr is intentionally ignored, with explanation as to why it's
okay to do so.
Also, catalog/dependency.c failed to record any dependency on the named
constraint in ON CONFLICT ON CONSTRAINT, allowing such a constraint to
be dropped while rules exist that depend on it, and allowing pg_dump to
dump such a rule before the constraint it refers to. The normal execution
path managed to error out reasonably for a dangling constraint reference,
but ruleutils.c dumped core; so in addition to fixing the omission, add
a protective check in ruleutils.c, since we can't retroactively add a
dependency in existing databases.
Back-patch to 9.5 where this code was introduced.
Report: <20160510190350.2608.48667@wrigleys.postgresql.org>
Alvaro Herrera [Tue, 10 May 2016 19:23:54 +0000 (16:23 -0300)]
Fix autovacuum for shared relations
The table-skipping logic in autovacuum would fail to consider that
multiple workers could be processing the same shared catalog in
different databases. This normally wouldn't be a problem: firstly
because autovacuum workers not for wraparound would simply ignore tables
in which they cannot acquire lock, and secondly because most of the time
these tables are small enough that even if multiple for-wraparound
workers are stuck in the same catalog, they would be over pretty
quickly. But in cases where the catalogs are severely bloated it could
become a problem.
Backpatch all the way back, because the problem has been there since the
beginning.
Reported by Ondřej Světlík
Discussion: https://www.postgresql.org/message-id/572B63B1.3030603%40flexibee.eu
https://www.postgresql.org/message-id/572A1072.5080308%40flexibee.eu
Tom Lane [Mon, 9 May 2016 20:50:23 +0000 (16:50 -0400)]
Stamp 9.5.3.
Peter Eisentraut [Mon, 9 May 2016 14:05:46 +0000 (10:05 -0400)]
Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 7a7a803d44fad7952cf6b1a1da29df2b06b1b380
Tom Lane [Sat, 7 May 2016 21:26:24 +0000 (17:26 -0400)]
Release notes for 9.5.3, 9.4.8, 9.3.13, 9.2.17, 9.1.22.
Tom Lane [Sat, 7 May 2016 17:16:50 +0000 (13:16 -0400)]
Docs: improve warnings about nextval() not producing gapless sequences.
In the documentation for nextval(), point out explicitly that INSERT ...
ON CONFLICT will call nextval() if needed for the insertion case, whether
or not it ends up following the ON CONFLICT path. This seems to be a
matter of some confusion, cf bug #14126, so let's be clear about it.
Also mention the issue in the CREATE SEQUENCE reference page, since that
is another place where people might expect such things to be covered.
Minor wording improvements nearby, as well.
Back-patch to 9.5 where ON CONFLICT was introduced.
Peter Eisentraut [Fri, 8 Apr 2016 17:48:14 +0000 (13:48 -0400)]
Distrust external OpenSSL clients; clear err queue
OpenSSL has an unfortunate tendency to mix per-session state error
handling with per-thread error handling. This can cause problems when
programs that link to libpq with OpenSSL enabled have some other use of
OpenSSL; without care, one caller of OpenSSL may cause problems for the
other caller. Backend code might similarly be affected, for example
when a third party extension independently uses OpenSSL without taking
the appropriate precautions.
To fix, don't trust other users of OpenSSL to clear the per-thread error
queue. Instead, clear the entire per-thread queue ahead of certain I/O
operations when it appears that there might be trouble (these I/O
operations mostly need to call SSL_get_error() to check for success,
which relies on the queue being empty). This is slightly aggressive,
but it's pretty clear that the other callers have a very dubious claim
to ownership of the per-thread queue. Do this in both frontend and
backend code.
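In OpenSSL terms, the pattern looks roughly like this (a hedged sketch, not
the libpq or backend code):
#include <openssl/err.h>
#include <openssl/ssl.h>
/* Clear the per-thread queue before an SSL call whose outcome will be
 * judged with SSL_get_error(), since that check is only reliable when the
 * queue starts out empty. */
static int
ssl_read_checked(SSL *ssl, void *buf, int len)
{
    int         n;
    ERR_clear_error();              /* don't trust other OpenSSL users */
    n = SSL_read(ssl, buf, len);
    if (n <= 0)
    {
        int         sslerr = SSL_get_error(ssl, n);
        /* ... report the error, consuming queued codes via ERR_get_error()
         * so they cannot confuse later callers ... */
        (void) sslerr;
    }
    return n;
}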
Finally, be more careful about clearing our own error queue, so as to
not cause these problems ourselves. It's possible that control previously
did not always reach SSLerrmessage(), where ERR_get_error() was supposed
to be called to clear the queue's earliest code. Make sure
ERR_get_error() is always called, so as to spare other users of OpenSSL
the possibility of similar problems caused by libpq (as opposed to
problems caused by a third party OpenSSL library like PHP's OpenSSL
extension). Again, do this in both frontend and backend code.
See bug #12799 and https://bugs.php.net/bug.php?id=68276
Based on patches by Dave Vitek and Peter Eisentraut.
From: Peter Geoghegan <pg@bowt.ie>
Peter Eisentraut [Sat, 7 May 2016 03:45:12 +0000 (23:45 -0400)]
Fix SSL tests
These were accidentally broken by the great backpatching of
331828b754378733cb5c2e49227603e7354e4e39.
Tom Lane [Sat, 7 May 2016 02:05:51 +0000 (22:05 -0400)]
Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old.
This patch essentially reverts commit 4c6780fd17aa43ed, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.
The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed. While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested). I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.
If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page. Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem. So let's just
follow the old version's conclusion about whether a TOAST table is needed.
(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17aa43ed did.)
Back-patch to 9.3, like the previous patch.
Discussion: <8110.1462291671@sss.pgh.pa.us>
Tom Lane [Fri, 6 May 2016 16:09:20 +0000 (12:09 -0400)]
Fix possible read past end of string in to_timestamp().
to_timestamp() handles the TH/th format codes by advancing over two input
characters, whatever those are. It failed to notice whether there were
two characters available to be skipped, making it possible to advance
the pointer past the end of the input string and keep on parsing.
A similar risk existed in the handling of "Y,YYY" format: it would advance
over three characters after the "," whether or not three characters were
available.
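The shape of the fix is just a bounds check before skipping; a standalone
sketch (hypothetical helper, not the formatting.c code):
#include <stdio.h>
static const char *
skip_two_chars(const char *s)
{
    /* advance only if two characters really remain before the NUL */
    if (s[0] != '\0' && s[1] != '\0')
        return s + 2;
    return s;
}
int
main(void)
{
    printf("[%s]\n", skip_two_chars("th 05"));  /* prints "[ 05]" */
    printf("[%s]\n", skip_two_chars("t"));      /* prints "[t]" */
    return 0;
}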
In principle this might be exploitable to disclose contents of server
memory. But the security team concluded that it would be very hard to use
that way, because the parsing loop would stop upon hitting any zero byte,
and TH/th format codes can't be consecutive --- they have to follow some
other format code, which would have to match whatever data is there.
So it seems impractical to examine memory very much beyond the end of the
input string via this bug; and the input string will always be in local
memory not in disk buffers, making it unlikely that anything very
interesting is close to it in a predictable way. So this doesn't quite
rise to the level of needing a CVE.
Thanks to Wolf Roediger for reporting this bug.
Tom Lane [Fri, 6 May 2016 00:08:58 +0000 (20:08 -0400)]
Update time zone data files to tzdata release 2016d.
DST law changes in Russia (Magadan, Tomsk regions) and Venezuela.
Historical corrections for Russia. There are new zone names Europe/Kirov
and Asia/Tomsk reflecting the fact that these regions now have different
time zone histories from adjacent regions.
Tom Lane [Thu, 5 May 2016 16:33:13 +0000 (12:33 -0400)]
Fix ordering/categorization of some recently-added system views.
Somebody added pg_replication_origin, pg_replication_origin_status and
pg_replication_slots to catalogs.sgml without a whole lot of concern for
either alphabetical order or the difference between a table and a view.
Clean up the mess.
Back-patch to 9.5, not so much because this is critical as because if
I don't it will result in a cross-branch divergence in release-9.5.sgml,
which would be a maintenance hazard.
Peter Eisentraut [Wed, 4 May 2016 18:07:00 +0000 (14:07 -0400)]
doc: Fix more typos
From: Alexander Law <exclusion@gmail.com>
Peter Eisentraut [Wed, 4 May 2016 01:06:25 +0000 (21:06 -0400)]
doc: Fix typos
From: Alexander Law <exclusion@gmail.com>
Tom Lane [Mon, 2 May 2016 15:18:10 +0000 (11:18 -0400)]
Fix configure's incorrect version tests for flex and perl.
awk's equality-comparison operator is "==" not "=". We got this right
in many places, but not in configure's checks for supported version
numbers of flex and perl. It hadn't been noticed because unsupported
versions are so old as to be basically extinct in the wild, and because
the only consequence is whether or not a WARNING flies by during
configure.
Daniel Gustafsson noted the problem with respect to the test for flex,
I found the other by reviewing other awk calls.
Heikki Linnakangas [Mon, 2 May 2016 07:07:49 +0000 (10:07 +0300)]
Remove unused macros.
CHECK_PAGE_OFFSET_RANGE() has been unused forever.
CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
bt_page_stats() and bt_page_items() functions were moved from pgstattuple
to pageinspect module. It still exists in pageinspect/btreefuncs.c.
Daniel Gustafsson
Peter Eisentraut [Mon, 2 May 2016 01:33:31 +0000 (21:33 -0400)]
doc: Fix typo
From: Guillaume Lelarge <guillaume@lelarge.info>
Tom Lane [Sat, 30 Apr 2016 00:19:38 +0000 (20:19 -0400)]
Fix mishandling of equivalence-class tests in parameterized plans.
Given a three-or-more-way equivalence class, such as X.X = Y.Y = Z.Z,
it was possible for the planner to omit one of the quals needed to
enforce that all members of the equivalence class are actually equal.
This only happened in the case of a parameterized join node for two
of the relations, that is a plan tree like
Nested Loop
-> Scan X
-> Nested Loop
-> Scan Y
-> Scan Z
Filter: Z.Z = X.X
The eclass machinery normally expects to apply X.X = Y.Y when those
two relations are joined, but in this shape of plan tree they aren't
joined until the top node --- and, if the lower nested loop is marked
as parameterized by X, the top node will assume that the relevant eclass
condition(s) got pushed down into the lower node. On the other hand,
the scan of Z assumes that it's only responsible for constraining Z.Z
to match any one of the other eclass members. So one or another of
the required quals sometimes fell between the cracks, depending on
whether consideration of the eclass in get_joinrel_parampathinfo()
for the lower nested loop chanced to generate X.X = Y.Y or X.X = Z.Z
as the appropriate constraint there. If it generated the latter,
it'd erroneously suppose that the Z scan would take care of matters.
To fix, force X.X = Y.Y to be generated and applied at that join node
when this case occurs.
This is *extremely* hard to hit in practice, because various planner
behaviors conspire to mask the problem; starting with the fact that the
planner doesn't really like to generate a parameterized plan of the
above shape. (It might have been impossible to hit it before we
tweaked things to allow this plan shape for star-schema cases.) Many
thanks to Alexander Kirkouski for submitting a reproducible test case.
The bug can be demonstrated in all branches back to 9.2 where parameterized
paths were introduced, so back-patch that far.
Andrew Dunstan [Fri, 29 Apr 2016 18:18:51 +0000 (14:18 -0400)]
Fix comment whitespace in VS2015 patch
per gripe from Michael Paquier.
Andrew Dunstan [Fri, 29 Apr 2016 13:49:31 +0000 (09:49 -0400)]
Fix typo in VS2015 patch
reported by Christian Ullrich
Andrew Dunstan [Fri, 29 Apr 2016 11:59:47 +0000 (07:59 -0400)]
Support building with Visual Studio 2015
Adjust the way we detect the locale. As a result the minimum Windows
version supported by VS2015 and later is Windows Vista. Add some tweaks
to remove new compiler warnings. Remove documentation references to the
now obsolete msysGit.
Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.
Backpatch to 9.5
Andres Freund [Fri, 29 Apr 2016 05:09:48 +0000 (22:09 -0700)]
Remember asking for feedback during walsender shutdown.
Since 5a991ef8 we're explicitly asking for feedback from the receiving
side when shutting down walsender, if there's not yet replicated
data.
Unfortunately we didn't remember (i.e. set waiting_for_ping_response to
true) having asked for feedback, leading to scenarios in which replies
were requested at a high frequency.
I can't reproduce this problem on my laptop, I think that's because the
problem requires a significant TCP window to manifest due to the
!pq_is_send_pending() condition. But since this clearly is a bug, let's
fix it. There's quite possibly more wrong than just this though.
While fiddling with WalSndDone(), I rewrote a hard to understand comment
about looking at the flush vs. the write position.
Reported-By: Nick Cleaton, Magnus Hagander
Author: Nick Cleaton
Discussion: CAFgz3kus=rC_avEgBV=+hRK5HYJ8vXskJRh8yEAbahJGTzF2VQ@mail.gmail.com
CABUevExsjROqDcD0A2rnJ6HK6FuKGyewJr3PL12pw85BHFGS2Q@mail.gmail.com
Backpatch: 9.4, where 5a991ef8 introduced the use of feedback messages
during shutdown.
Tom Lane [Thu, 28 Apr 2016 15:50:58 +0000 (11:50 -0400)]
Adjust DatumGetBool macro, this time for sure.
Commit 23a41573c attempted to fix the DatumGetBool macro to ignore bits
in a Datum that are to the left of the actual bool value. But it did that
by casting the Datum to bool; and on compilers that use C99 semantics for
bool, that ends up being a whole-word test, not a 1-byte test. This seems
to be the true explanation for contrib/seg failing in VS2015. To fix, use
GET_1_BYTE() explicitly. I think in the previous patch, I'd had some idea
of not having to commit to bool being exactly 1 byte wide, but regardless
of what the compiler's bool is, boolean columns and Datums are certainly
1 byte wide.
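The distinction is easy to see in a standalone program (an illustration,
not PostgreSQL code; GET_1_BYTE() is essentially the low-byte mask below):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
int
main(void)
{
    uintptr_t   datum = 0x100;  /* low byte is 0 ("false"), but a junk bit
                                 * is set further to the left */
    bool        via_cast = (bool) datum;            /* whole-word test: true */
    bool        via_mask = (bool) (datum & 0xFF);   /* one-byte test: false */
    printf("cast=%d mask=%d\n", via_cast, via_mask);
    return 0;
}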
The previous fix was (eventually) back-patched into all active versions,
so do likewise with this one.
Tom Lane [Thu, 28 Apr 2016 15:48:10 +0000 (11:48 -0400)]
Revert "Convert contrib/seg's bool-returning SQL functions to V1 call convention."
This reverts commit b1dd2f86ce7d43f23f6aae307bb22de826849e7d.
That turns out to have been based on a faulty diagnosis of why the
VS2015 build was misbehaving. Instead, we need to fix DatumGetBool().
Noah Misch [Wed, 27 Apr 2016 01:53:58 +0000 (21:53 -0400)]
Impose a full barrier in generic-xlc.h atomics functions.
pg_atomic_compare_exchange_*_impl() were providing only the semantics of
an acquire barrier. Buildfarm members hornet and mandrill revealed this
deficit beginning with commit 008608b9d51061b1f598c197477b3dc7be9c4a64.
While we have no report of symptoms in 9.5, we can't rule out the
possibility of certain compilers, hardware, or extension code relying on
these functions' specified barrier semantics. Back-patch to 9.5, where
commit b64d92f1a5602c55ee8b27a7ac474f03b7aee340 introduced atomics.
Reviewed by Andres Freund.
Peter Eisentraut [Mon, 25 Apr 2016 00:44:22 +0000 (20:44 -0400)]
doc: Fix typo
From: Andreas Seltenreich <andreas.seltenreich@credativ.de>
Tom Lane [Sat, 23 Apr 2016 20:53:15 +0000 (16:53 -0400)]
Rename strtoi() to strtoint().
NetBSD has seen fit to invent a libc function named strtoi(), which
conflicts with the long-established static functions of the same name in
datetime.c and ecpg's interval.c. While muttering darkly about intrusions
on application namespace, we'll rename our functions to avoid the conflict.
Back-patch to all supported branches, since this would affect attempts
to build any of them on recent NetBSD.
Thomas Munro
Peter Eisentraut [Sat, 23 Apr 2016 18:48:02 +0000 (14:48 -0400)]
doc: Fix typos
From: Erik Rijkers <er@xs4all.nl>
Tom Lane [Fri, 22 Apr 2016 15:54:23 +0000 (11:54 -0400)]
Convert contrib/seg's bool-returning SQL functions to V1 call convention.
It appears that we can no longer get away with using V0 call convention
for bool-returning functions in newer versions of MSVC. The compiler
seems to generate code that doesn't clear the higher-order bits of the
result register, causing the bool result Datum to often read as "true"
when "false" was intended. This is not very surprising, since the
function thinks it's returning a bool-width result but fmgr_oldstyle
assumes that V0 functions return "char *"; what's surprising is that
that hack worked for so long on so many platforms.
The only functions of this description in core+contrib are in contrib/seg,
which we'd intentionally left mostly in V0 style to serve as a warning
canary if V0 call convention breaks. We could imagine hacking things
so that they're still V0 (we'd have to redeclare the bool-returning
functions as returning some suitably wide integer type, like size_t,
at the C level). But on the whole it seems better to convert 'em to V1.
We can still leave the pointer- and int-returning functions in V0 style,
so that the test coverage isn't gone entirely.
Back-patch to 9.5, since our intention is to support VS2015 in 9.5
and later. There's no SQL-level change in the functions' behavior
so back-patching should be safe enough.
Discussion: <22094.1461273324@sss.pgh.pa.us>
Michael Paquier, adjusted some by me
Magnus Hagander [Fri, 22 Apr 2016 09:18:59 +0000 (05:18 -0400)]
Add putenv support for msvcrt from Visual Studio 2013
This was missed when VS 2013 support was added.
Michael Paquier
Tom Lane [Fri, 22 Apr 2016 03:17:36 +0000 (23:17 -0400)]
Fix unexpected side-effects of operator_precedence_warning.
The implementation of that feature involves injecting nodes into the
raw parsetree where explicit parentheses appear. Various places in
parse_expr.c that test to see "is this child node of type Foo" need to
look through such nodes, else we'll get different behavior when
operator_precedence_warning is on than when it is off. Note that we only
need to handle this when testing untransformed child nodes, since the
AEXPR_PAREN nodes will be gone anyway after transformExprRecurse.
Per report from Scott Ribe and additional code-reading. Back-patch
to 9.5 where this feature was added.
Report: <ED37E303-1B0A-4CD8-8E1E-B9C4C2DD9A17@elevated-dev.com>
Tom Lane [Fri, 22 Apr 2016 00:05:58 +0000 (20:05 -0400)]
Fix planner failure with full join in RHS of left join.
Given a left join containing a full join in its righthand side, with
the left join's joinclause referencing only one side of the full join
(in a non-strict fashion, so that the full join doesn't get simplified),
the planner could fail with "failed to build any N-way joins" or related
errors. This happened because the full join was seen as overlapping the
left join's RHS, and then recent changes within join_is_legal() caused
that function to conclude that the full join couldn't validly be formed.
Rather than try to rejigger join_is_legal() yet more to allow this,
I think it's better to fix initsplan.c so that the required join order
is explicit in the SpecialJoinInfo data structure. The previous coding
there essentially ignored full joins, relying on the fact that we don't
flatten them in the joinlist data structure to preserve their ordering.
That's sufficient to prevent a wrong plan from being formed, but as this
example shows, it's not sufficient to ensure that the right plan will
be formed. We need to work a bit harder to ensure that the right plan
looks sane according to the SpecialJoinInfos.
Per bug #14105 from Vojtech Rylko. This was apparently induced by
commit 8703059c6 (though now that I've seen it, I wonder whether there
are related cases that could have failed before that); so back-patch
to all active branches. Unfortunately, that patch also went into 9.0,
so this bug is a regression that won't be fixed in that branch.
Tom Lane [Thu, 21 Apr 2016 20:58:47 +0000 (16:58 -0400)]
Improve TranslateSocketError() to handle more Windows error codes.
The coverage was rather lean for cases that bind() or listen() might
return. Add entries for everything that there's a direct equivalent
for in the set of Unix errnos that elog.c has heard of.
Tom Lane [Thu, 21 Apr 2016 20:16:19 +0000 (16:16 -0400)]
Remove dead code in win32.h.
There's no longer a need for the MSVC-version-specific code stanza that
forcibly redefines errno code symbols, because since commit 73838b52 we're
unconditionally redefining them in the stanza before this one anyway.
Now it's merely confusing and ugly, so get rid of it; and improve the
comment that explains what's going on here.
Although this is just cosmetic, back-patch anyway since I'm intending
to back-patch some less-cosmetic changes in this same hunk of code.
Tom Lane [Thu, 21 Apr 2016 19:44:18 +0000 (15:44 -0400)]
Provide errno-translation wrappers around bind() and listen() on Windows.
Fix Windows builds to report something useful rather than "could not bind
IPv4 socket: No error" when bind() fails.
Back-patch of commits d1b7d4877b9a71f4 and 22989a8e34168f57.
Discussion: <4065.1452450340@sss.pgh.pa.us>
Tom Lane [Thu, 21 Apr 2016 18:20:18 +0000 (14:20 -0400)]
Fix ruleutils.c's dumping of ScalarArrayOpExpr containing an EXPR_SUBLINK.
When we shoehorned "x op ANY (array)" into the SQL syntax, we created a
fundamental ambiguity as to the proper treatment of a sub-SELECT on the
righthand side: perhaps what's meant is to compare x against each row of
the sub-SELECT's result, or perhaps the sub-SELECT is meant as a scalar
sub-SELECT that delivers a single array value whose members should be
compared against x. The grammar resolves it as the former case whenever
the RHS is a select_with_parens, making the latter case hard to reach ---
but you can get at it, with tricks such as attaching a no-op cast to the
sub-SELECT. Parse analysis would throw away the no-op cast, leaving a
parsetree with an EXPR_SUBLINK SubLink directly under a ScalarArrayOpExpr.
ruleutils.c was not clued in on this fine point, and would naively emit
"x op ANY ((SELECT ...))", which would be parsed as the first alternative,
typically leading to errors like "operator does not exist: text = text[]"
during dump/reload of a view or rule containing such a construct. To fix,
emit a no-op cast when dumping such a parsetree. This might well be
exactly what the user wrote to get the construct accepted in the first
place; and even if she got there with some other dodge, it is a valid
representation of the parsetree.
Per report from Karl Czajkowski. He mentioned only a case involving
RLS policies, but actually the problem is very old, so back-patch to
all supported branches.
Report: <20160421001832.GB7976@moraine.isi.edu>
Tom Lane [Thu, 21 Apr 2016 03:48:13 +0000 (23:48 -0400)]
Honor PGCTLTIMEOUT environment variable for pg_regress' startup wait.
In commit 2ffa86962077c588 we made pg_ctl recognize an environment variable
PGCTLTIMEOUT to set the default timeout for starting and stopping the
postmaster. However, pg_regress uses pg_ctl only for the "stop" end of
that; it has bespoke code for starting the postmaster, and that code has
historically had a hard-wired 60-second timeout. Further buildfarm
experience says it'd be a good idea if that timeout were also controlled
by PGCTLTIMEOUT, so let's make it so. Like the previous patch, back-patch
to all active branches.
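A sketch of the intended behavior (illustrative, not the pg_regress code):
consult PGCTLTIMEOUT and fall back to the old 60-second default.
#include <stdlib.h>
static int
startup_wait_seconds(void)
{
    const char *env = getenv("PGCTLTIMEOUT");
    if (env != NULL)
    {
        int         val = atoi(env);
        if (val > 0)
            return val;
    }
    return 60;                  /* the historical hard-wired default */
}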
Discussion: <13969.1461191936@sss.pgh.pa.us>
Tom Lane [Wed, 20 Apr 2016 18:25:15 +0000 (14:25 -0400)]
Fix memory leak and other bugs in ginPlaceToPage() & subroutines.
Commit 36a35c550ac114ca turned the interface between ginPlaceToPage and
its subroutines in gindatapage.c and ginentrypage.c into a royal mess:
page-update critical sections were started in one place and finished in
another place not even in the same file, and the very same subroutine
might return having started a critical section or not. Subsequent patches
band-aided over some of the problems with this design by making things
even messier.
One user-visible resulting problem is memory leaks caused by the need for
the subroutines to allocate storage that would survive until ginPlaceToPage
calls XLogInsert (as reported by Julien Rouhaud). This would not typically
be noticeable during retail index updates. It could be visible in a GIN
index build, in the form of memory consumption swelling to several times
the commanded maintenance_work_mem.
Another rather nasty problem is that in the internal-page-splitting code
path, we would clear the child page's GIN_INCOMPLETE_SPLIT flag well before
entering the critical section that it's supposed to be cleared in; a
failure in between would leave the index in a corrupt state. There were
also assorted coding-rule violations with little immediate consequence but
possible long-term hazards, such as beginning an XLogInsert sequence before
entering a critical section, or calling elog(DEBUG) inside a critical
section.
To fix, redefine the API between ginPlaceToPage() and its subroutines
by splitting the subroutines into two parts. The "beginPlaceToPage"
subroutine does what can be done outside a critical section, including
full computation of the result pages into temporary storage when we're
going to split the target page. The "execPlaceToPage" subroutine is called
within a critical section established by ginPlaceToPage(), and it handles
the actual page update in the non-split code path. The critical section,
as well as the XLOG insertion call sequence, are both now always started
and finished in ginPlaceToPage(). Also, make ginPlaceToPage() create and
work in a short-lived memory context to eliminate the leakage problem.
(Since a short-lived memory context had been getting created in the most
common code path in the subroutines, this shouldn't cause any noticeable
performance penalty; we're just moving the overhead up one call level.)
In passing, fix a bunch of comments that had gone unmaintained throughout
all this klugery.
Report: <571276DD.5050303@dalibo.com>
Tom Lane [Mon, 18 Apr 2016 17:33:07 +0000 (13:33 -0400)]
Further reduce the number of semaphores used under --disable-spinlocks.
Per discussion, there doesn't seem to be much value in having
NUM_SPINLOCK_SEMAPHORES set to 1024: under any scenario where you are
running more than a few backends concurrently, you really had better have a
real spinlock implementation if you want tolerable performance. And 1024
semaphores is a sizable fraction of the system-wide SysV semaphore limit
on many platforms. Therefore, reduce this setting's default value to 128
to make it less likely to cause out-of-semaphores problems.
Peter Eisentraut [Sat, 16 Apr 2016 00:44:10 +0000 (20:44 -0400)]
doc: Add missing parentheses
From: Alexander Law <exclusion@gmail.com>
Tom Lane [Fri, 15 Apr 2016 16:11:27 +0000 (12:11 -0400)]
Fix possible crash in ALTER TABLE ... REPLICA IDENTITY USING INDEX.
Careless coding added by commit 07cacba983ef79be could result in a crash
or a bizarre error message if someone tried to select an index on the
OID column as the replica identity index for a table. Back-patch to 9.4
where the feature was introduced.
Discussion: CAKJS1f8TQYgTRDyF1_u9PVCKWRWz+DkieH=U7954HeHVPJKaKg@mail.gmail.com
David Rowley
Tom Lane [Fri, 15 Apr 2016 04:02:26 +0000 (00:02 -0400)]
Fix memory leak in GIN index scans.
The code had a query-lifespan memory leak when encountering GIN entries
that have posting lists (rather than posting trees, ie, there are a
relatively small number of heap tuples containing this index key value).
With a suitable data distribution this could add up to a lot of leakage.
Problem seems to have been introduced by commit 36a35c550, so back-patch
to 9.4.
Julien Rouhaud
Andres Freund [Fri, 15 Apr 2016 01:54:06 +0000 (18:54 -0700)]
Remove trailing commas in enums.
These aren't valid C89. Found thanks to gcc's -Wc90-c99-compat. These
exist in differing places in most supported branches.
Tom Lane [Thu, 14 Apr 2016 23:42:22 +0000 (19:42 -0400)]
Fix core dump in ReorderBufferRestoreChange on alignment-picky platforms.
When re-reading an update involving both an old tuple and a new tuple from
disk, reorderbuffer.c was careless about whether the new tuple is suitably
aligned for direct access --- in general, it isn't. We'd missed seeing
this in the buildfarm because the contrib/test_decoding tests exercise this
code path only a few times, and by chance all of those cases have old
tuples with length a multiple of 4, which is usually enough to make the
access to the new tuple's t_len safe. For some still-not-entirely-clear
reason, however, Debian's sparc build gets a bus error, as reported by
Christoph Berg; perhaps it's assuming 8-byte alignment of the pointer?
The lack of previous field reports is probably because you need all of
these conditions to trigger a crash: an alignment-picky platform (not
Intel), a transaction large enough to spill to disk, an update within
that xact that changes a primary-key field and has an odd-length old tuple,
and of course logical decoding tracing the transaction.
Avoid the alignment assumption by using memcpy instead of fetching t_len
directly, and add a test case that exposes the crash on picky platforms.
Back-patch to 9.4 where the bug was introduced.
Discussion: <20160413094117.GC21485@msg.credativ.de>
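A hedged sketch of the technique (the types and names are illustrative, not
the actual reorderbuffer.c structures): rather than casting an arbitrary
buffer offset to a tuple pointer and reading t_len through it, copy the field
out with memcpy, which has no alignment requirement:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative on-disk layout: a length word followed by tuple data. */
    typedef struct DemoDiskTuple
    {
        uint32_t    t_len;          /* length of the data that follows */
        /* tuple data follows */
    } DemoDiskTuple;

    static uint32_t
    demo_read_tuple_len(const char *raw)
    {
        uint32_t    len;

        /*
         * "raw" can point at an odd offset inside a read() buffer, so
         * ((DemoDiskTuple *) raw)->t_len may fault on alignment-picky
         * hardware such as SPARC.  memcpy is always safe.
         */
        memcpy(&len, raw + offsetof(DemoDiskTuple, t_len), sizeof(len));
        return len;
    }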
Tom Lane [Thu, 14 Apr 2016 16:18:09 +0000 (12:18 -0400)]
Adjust datatype of ReplicationState.acquired_by.
It was declared as "pid_t", which would be fine except that none of
the places that printed it in error messages took any thought for the
possibility that it's not equivalent to "int". This leads to warnings
on some buildfarm members, and could possibly lead to actually wrong
error messages on those platforms. There doesn't seem to be any very
good reason not to just make it "int"; it's only ever assigned from
MyProcPid, which is int. If we want to cope with PIDs that are wider
than int, this is not the place to start.
Also, fix the comment, which seems to perhaps be a leftover from a time
when the field was only a bool?
Per buildfarm. Back-patch to 9.5 which has same issue.
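The underlying portability rule, sketched with plain printf: a pid_t is not
guaranteed to be int, so either store the value as int (as this commit does)
or cast it at each print site:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main(void)
    {
        pid_t   pid = getpid();

        /* printf("%d", pid) is only correct where pid_t happens to be int. */
        printf("replication state acquired by PID %d\n", (int) pid);
        return 0;
    }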
Tom Lane [Wed, 13 Apr 2016 22:57:52 +0000 (18:57 -0400)]
Fix pg_dump so pg_upgrade'ing an extension with simple opfamilies works.
As reported by Michael Feld, pg_upgrade'ing an installation having
extensions with operator families that contain just a single operator class
failed to reproduce the extension membership of those operator families.
This caused no immediate ill effects, but would create problems when later
trying to do a plain dump and restore, because the seemingly-not-part-of-
the-extension operator families would appear separately in the pg_dump
output, and then would conflict with the families created by loading the
extension. This has been broken ever since extensions were introduced,
and many of the standard contrib extensions are affected, so it's a bit
astonishing nobody complained before.
The cause of the problem is a perhaps-ill-considered decision to omit
such operator families from pg_dump's output on the grounds that the
CREATE OPERATOR CLASS commands could recreate them, and having explicit
CREATE OPERATOR FAMILY commands would impede loading the dump script into
pre-8.3 servers. Whatever the merits of that decision when 8.3 was being
written, it looks like a poor tradeoff now. We can fix the pg_upgrade
problem simply by removing that code, so that the operator families are
dumped explicitly (and then will be properly made to be part of their
extensions).
Although this fixes the behavior of future pg_upgrade runs, it does nothing
to clean up existing installations that may have improperly-linked operator
families. Given the small number of complaints to date, maybe we don't
need to worry about providing an automated solution for that; anyone who
needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR
FAMILY" commands, or even just ignore the duplicate-opfamily errors they
get during a pg_restore. In any case we need this fix.
Back-patch to all supported branches.
Discussion: <20228.1460575691@sss.pgh.pa.us>
Tom Lane [Tue, 12 Apr 2016 00:07:17 +0000 (20:07 -0400)]
Fix _SPI_execute_plan() for CREATE TABLE IF NOT EXISTS foo AS ...
When IF NOT EXISTS was added to CREATE TABLE AS, this logic didn't get
the memo, possibly resulting in an Assert failure. It looks like there
would have been no ill effects in a non-Assert build, though. Back-patch
to 9.5 where the IF NOT EXISTS option was added.
Stas Kelvich
Tom Lane [Mon, 11 Apr 2016 22:17:02 +0000 (18:17 -0400)]
Fix freshly-introduced PL/Python portability bug.
It turns out that those PyErr_Clear() calls I removed from plpy_elog.c
in 7e3bb080387f4143 et al were not quite as random as they appeared: they
mask a Python 2.3.x bug. (Specifically, it turns out that PyType_Ready()
can fail if the error indicator is set on entry, and PLy_traceback's fetch
of frame.f_code may be the first operation in a session that requires the
"frame" type to be readied. Ick.) Put back the clear call, but in a more
centralized place closer to what it's protecting, and this time with a
comment warning what it's really for.
Per buildfarm member prairiedog. Although prairiedog was only failing
on HEAD, it seems clearly possible for this to occur in older branches
as well, so back-patch to 9.2 the same as the previous patch.
Tom Lane [Mon, 11 Apr 2016 03:15:55 +0000 (23:15 -0400)]
Fix access-to-already-freed-memory issue in plpython's error handling.
PLy_elog() could attempt to access strings that Python had already freed,
because the strings that PLy_get_spi_error_data() returns are simply
pointers into storage associated with the error "val" PyObject. That's
fine at the instant PLy_get_spi_error_data() returns them, but just after
that PLy_traceback() intentionally releases the only refcount on that
object, allowing it to be freed --- so that the strings we pass to
ereport() are dangling pointers.
In principle this could result in garbage output or a coredump. In
practice, I think the risk is pretty low, because there are no Python
operations between where we decrement that refcount and where we use the
strings (and copy them into PG storage), and thus no reason for Python
to recycle the storage. Still, it's clearly hazardous, and it leads to
Valgrind complaints when running under a Valgrind that hasn't been
lobotomized to ignore Python memory allocations.
The code was a mess anyway: we fetched the error data out of Python
(clearing Python's error indicator) with PyErr_Fetch, examined it, pushed
it back into Python with PyErr_Restore (re-setting the error indicator),
then immediately pulled it back out with another PyErr_Fetch. Just to
confuse matters even more, there were some gratuitous-and-yet-hazardous
PyErr_Clear calls in the "examine" step, and we didn't get around to doing
PyErr_NormalizeException until after the second PyErr_Fetch, making it even
less clear which object was being manipulated where and whether we still
had a refcount on it. (If PyErr_NormalizeException did substitute a
different "val" object, it's possible that the problem could manifest for
real, because then we'd be doing assorted Python stuff with no refcount
on the object we have string pointers into.)
So, rearrange all that into some semblance of sanity, and don't decrement
the refcount on the Python error objects until the end of PLy_elog().
In HEAD, I failed to resist the temptation to reformat some messy bits
from 5c3c3cd0a3046339 along the way.
Back-patch as far as 9.2, because the code is substantially the same
that far back. I believe that 9.1 has the bug as well; but the code
around it is rather different and I don't want to take a chance on
breaking something for what seems a low-probability problem.
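A heavily simplified sketch of the safe ordering (not the real plpy_elog.c
code; extract_detail_string() is a hypothetical helper standing in for
PLy_get_spi_error_data()): copy the strings into PostgreSQL-owned memory while
the Python references are still held, and drop the refcounts only afterwards:

    #include <Python.h>
    #include "postgres.h"

    static const char *extract_detail_string(PyObject *val);   /* hypothetical */

    static void
    demo_report_python_error(int elevel)
    {
        PyObject   *exc, *val, *tb;
        char       *detail = NULL;

        PyErr_Fetch(&exc, &val, &tb);                /* clears Python's indicator */
        PyErr_NormalizeException(&exc, &val, &tb);   /* normalize before examining val */

        /*
         * The pointer returned here lives inside val's storage, so copy it
         * into PostgreSQL memory before releasing any references.
         */
        if (val != NULL)
        {
            const char *raw = extract_detail_string(val);

            if (raw != NULL)
                detail = pstrdup(raw);
        }

        /* Now it is safe to let Python free the error objects. */
        Py_XDECREF(exc);
        Py_XDECREF(val);
        Py_XDECREF(tb);

        ereport(elevel,
                (errmsg("error from Python"),
                 detail ? errdetail("%s", detail) : 0));
    }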
Teodor Sigaev [Fri, 8 Apr 2016 18:25:32 +0000 (21:25 +0300)]
Fix possible use of uninitialised value in ts_headline()
Found during investigation of failure of skink buildfarm member and its
valgrind report.
Backpatch to all supported branches
Andrew Dunstan [Fri, 8 Apr 2016 16:25:10 +0000 (12:25 -0400)]
Turn down MSVC compiler verbosity
Most of what is produced by the detailed verbosity level is of no
interest at all, so switch to the normal level for more usable output.
Christian Ullrich
Backpatch to all live branches
Tom Lane [Fri, 8 Apr 2016 16:31:42 +0000 (12:31 -0400)]
Fix multiple bugs in tablespace symlink removal.
Don't try to examine S_ISLNK(st.st_mode) after a failed lstat().
It's undefined.
Also, if the lstat() reported ENOENT, we do not wish that to be a hard
error, but the code might nonetheless treat it as one (giving an entirely
misleading error message, too) depending on luck-of-the-draw as to what
S_ISLNK() returned.
Don't throw error for ENOENT from rmdir(), either. (We're not really
expecting ENOENT because we just stat'd the file successfully; but
if we're going to allow ENOENT in the symlink code path, surely the
directory code path should too.)
Generate an appropriate errcode for its-the-wrong-type-of-file complaints.
(ERRCODE_SYSTEM_ERROR doesn't seem appropriate, and failing to write
errcode() around it certainly doesn't work, and not writing an errcode
at all is not per project policy.)
Valgrind noticed the undefined S_ISLNK result; the other problems emerged
while reading the code in the area.
All of this appears to have been introduced in 8f15f74a44f68f9c.
Back-patch to 9.5 where that commit appeared.
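A self-contained sketch of the safer pattern described above (plain POSIX
calls and stderr messages stand in for the backend's ereport machinery):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Returns 0 on success or ignorable absence, -1 on a real problem. */
    static int
    demo_remove_tablespace_link(const char *linkloc)
    {
        struct stat st;

        if (lstat(linkloc, &st) < 0)
        {
            if (errno == ENOENT)
                return 0;           /* already gone: not an error */
            fprintf(stderr, "could not stat \"%s\": %s\n",
                    linkloc, strerror(errno));
            return -1;              /* st is undefined here -- never inspect it */
        }

        if (S_ISLNK(st.st_mode))    /* safe: lstat() succeeded */
            return unlink(linkloc);

        if (S_ISDIR(st.st_mode))
        {
            if (rmdir(linkloc) < 0 && errno != ENOENT)
                return -1;          /* tolerate ENOENT here too */
            return 0;
        }

        fprintf(stderr, "\"%s\" is not a directory or symbolic link\n", linkloc);
        return -1;
    }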
Alvaro Herrera [Tue, 5 Apr 2016 22:03:42 +0000 (19:03 -0300)]
Fix broken ALTER INDEX documentation
Commit b8a91d9d1c put the description of the new IF EXISTS clause in the
wrong place -- move it where it belongs.
Backpatch to 9.2.
Tom Lane [Mon, 4 Apr 2016 22:05:23 +0000 (18:05 -0400)]
Disallow newlines in parameter values to be set in ALTER SYSTEM.
As noted by Julian Schauder in bug #14063, the configuration-file parser
doesn't support embedded newlines in string literals. While there might
someday be a good reason to remove that restriction, there doesn't seem
to be one right now. However, ALTER SYSTEM SET could accept strings
containing newlines, since many of the variable-specific value-checking
routines would just see a newline as whitespace. This led to writing a
postgresql.auto.conf file that was broken and had to be removed manually.
Pending a reason to work harder, just throw an error if someone tries this.
In passing, fix several places in the ALTER SYSTEM logic that failed to
provide an errcode() for an ereport(), and thus would falsely log the
failure as an internal XX000 error.
Back-patch to 9.4 where ALTER SYSTEM was introduced.
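A minimal sketch of the added validation (the exact message text and choice of
errcode in the actual commit may differ):

    #include "postgres.h"
    #include <string.h>

    static void
    demo_check_no_newline(const char *value)    /* hypothetical helper */
    {
        /* The configuration-file parser cannot represent embedded newlines. */
        if (value != NULL && strchr(value, '\n') != NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("parameter value for ALTER SYSTEM must not contain a newline")));
    }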
Tom Lane [Mon, 4 Apr 2016 15:13:17 +0000 (11:13 -0400)]
Fix latent portability issue in pgwin32_dispatch_queued_signals().
The first iteration of the signal-checking loop would compute sigmask(0)
which expands to 1<<(-1) which is undefined behavior according to the
C standard. The lack of field reports of trouble suggest that it
evaluates to 0 on all existing Windows compilers, but that's hardly
something to rely on. Since signal 0 isn't a queueable signal anyway,
we can just make the loop iterate from 1 instead, and save a few cycles
as well as avoiding the undefined behavior.
In passing, avoid evaluating the volatile expression UNBLOCKED_SIGNAL_QUEUE
twice in a row; there's no reason to waste cycles like that.
Noted by Aleksander Alekseev, though this isn't his proposed fix.
Back-patch to all supported branches.
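A sketch of the loop change with stand-in names for the Windows
signal-emulation internals:

    /* Stand-ins for the real win32 signal-emulation definitions. */
    #define DEMO_PG_SIGNAL_COUNT    32
    #define demo_sigmask(sig)       (1 << ((sig) - 1))  /* demo_sigmask(0) is 1 << -1: undefined */

    extern volatile int demo_signal_queue;      /* made-up name for the queued-signal bitmask */

    static void
    demo_dispatch_queued_signals(void)
    {
        /* Read the volatile bitmask once rather than re-evaluating it. */
        int     exec_mask = demo_signal_queue;
        int     i;

        /* Start at 1: signal 0 is not queueable, and demo_sigmask(0) is undefined. */
        for (i = 1; i < DEMO_PG_SIGNAL_COUNT; i++)
        {
            if (exec_mask & demo_sigmask(i))
            {
                /* ... deliver signal i ... */
            }
        }
    }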
Alvaro Herrera [Thu, 31 Mar 2016 02:39:15 +0000 (23:39 -0300)]
Fix broken variable declaration
Author: Konstantin Knizhnik
Tom Lane [Wed, 30 Mar 2016 01:38:14 +0000 (21:38 -0400)]
Remove TZ environment-variable entry from postgres reference page.
The server hasn't paid attention to the TZ environment variable since
commit ca4af308c32d03db, but that commit missed removing this documentation
reference, as did commit d883b916a947a3c6 which added the reference where
it now belongs (initdb).
Back-patch to 9.2 where the behavior changed. Also back-patch
d883b916a947a3c6 as needed.
Matthew Somerville
Robert Haas [Tue, 29 Mar 2016 17:46:57 +0000 (13:46 -0400)]
Fix pgbench documentation error.
The description of what the per-transaction log file says for skipped
transactions is just plain wrong.
Report and patch by Tomas Vondra, reviewed by Fabien Coelho and
modified by me.
Tom Lane [Tue, 29 Mar 2016 15:54:57 +0000 (11:54 -0400)]
Avoid possibly-unsafe use of Windows' FormatMessage() function.
Whenever this function is used with the FORMAT_MESSAGE_FROM_SYSTEM flag,
it's good practice to include FORMAT_MESSAGE_IGNORE_INSERTS as well.
Otherwise, if the message contains any %n insertion markers, the function
will try to fetch argument strings to substitute --- which we are not
passing, possibly leading to a crash. This is exactly analogous to the
rule about not giving printf() a format string you're not in control of.
Noted and patched by Christian Ullrich.
Back-patch to all supported branches.
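The rule in practice, sketched as a small Win32 helper (buffer handling
simplified):

    #include <windows.h>
    #include <stdio.h>

    /* FORMAT_MESSAGE_IGNORE_INSERTS keeps FormatMessage from trying to
     * substitute arguments for any %n insertion markers in the message. */
    static void
    demo_format_win32_error(DWORD code, char *buf, DWORD buflen)
    {
        if (!FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                            FORMAT_MESSAGE_IGNORE_INSERTS,
                            NULL,
                            code,
                            MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                            buf,
                            buflen,
                            NULL))
            snprintf(buf, buflen, "unknown error %lu", (unsigned long) code);
    }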
Alvaro Herrera [Mon, 28 Mar 2016 22:11:12 +0000 (19:11 -0300)]
Mention BRIN as able to do multi-column indexes
Documentation mentioned B-tree, GiST and GIN as able to do multicolumn
indexes; I failed to add BRIN to the list.
Author: Petr Jediný
Reviewed-By: Fujii Masao, Emre Hasegeli
Tom Lane [Mon, 28 Mar 2016 20:07:39 +0000 (16:07 -0400)]
Stamp 9.5.2.
Tom Lane [Mon, 28 Mar 2016 15:32:17 +0000 (11:32 -0400)]
Last-minute updates for release notes.
Security: CVE-2016-2193, CVE-2016-3065
Alvaro Herrera [Mon, 28 Mar 2016 13:57:42 +0000 (10:57 -0300)]
Add missing checks to some of pageinspect's BRIN functions
brin_page_type() and brin_metapage_info() did not enforce being called
by superuser, like other pageinspect functions that take bytea do.
Since they don't verify the passed page thoroughly, it is possible to
use them to read server memory with a carefully crafted bytea value,
up to a few kilobytes from where the input bytea is located.
Have them throw errors if called by a non-superuser.
Report and initial patch: Andreas Seltenreich
Security: CVE-2016-3065
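The guard added is PostgreSQL's usual superuser check for raw-page functions;
a sketch for a hypothetical bytea-taking function:

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"      /* superuser() */

    PG_FUNCTION_INFO_V1(demo_page_inspect);

    Datum
    demo_page_inspect(PG_FUNCTION_ARGS)         /* hypothetical function */
    {
        bytea      *raw_page;

        if (!superuser())
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("must be superuser to use raw page functions")));

        /* Only after the privilege check do we look at the caller-supplied page. */
        raw_page = PG_GETARG_BYTEA_P(0);
        (void) raw_page;                        /* ... interpret the page image ... */

        PG_RETURN_NULL();
    }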
Stephen Frost [Mon, 28 Mar 2016 13:03:41 +0000 (09:03 -0400)]
Reset plan->row_security_env and planUserId
In the plancache, we check if the environment we planned the query under
has changed in a way which requires us to re-plan, such as when the user
for whom the plan was prepared changes and RLS is being used (and,
therefore, there may be different policies to apply).
Unfortunately, while those values were set and checked, they were not
being reset when the query was re-planned and therefore, in cases where
we change role, re-plan, and then change role again, we weren't
re-planning again. This leads to potentially incorrect policies being
applied in cases where role-specific policies are used and a given query
is planned under one role and then executed under other roles, which
could happen under security definer functions or when a common user and
query is planned initially and then re-used across multiple SET ROLEs.
Further, extensions which made use of CopyCachedPlan() may suffer from
similar issues as the RLS-related fields were not properly copied as
part of the plan and therefore RevalidateCachedQuery() would copy in the
current settings without invalidating the query.
Fix by using the same approach used for 'search_path', where we set the
correct values in CompleteCachedPlan(), check them early on in
RevalidateCachedQuery() and then properly reset them if re-planning.
Also, copy through the values during CopyCachedPlan().
Pointed out by Ashutosh Bapat. Reviewed by Michael Paquier.
Back-patch to 9.5 where RLS was introduced.
Security: CVE-2016-2193
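A heavily simplified sketch of the set/check/reset pattern described above;
the structure and function are illustrative, with only the field names
(planUserId, row_security_env) taken from the commit message:

    #include "postgres.h"
    #include "miscadmin.h"      /* GetUserId() */

    /* Illustrative stand-in; the real fields live in CachedPlanSource. */
    typedef struct DemoPlanSource
    {
        Oid     planUserId;         /* role the plan was prepared for */
        bool    row_security_env;   /* row_security setting at plan time */
        bool    is_valid;
    } DemoPlanSource;

    static void
    demo_revalidate(DemoPlanSource *plan, bool row_security_now)
    {
        /* A plan built for a different role or row_security setting is stale. */
        if (plan->is_valid &&
            (plan->planUserId != GetUserId() ||
             plan->row_security_env != row_security_now))
            plan->is_valid = false;

        if (!plan->is_valid)
        {
            /* ... re-plan ..., then record the environment we planned under */
            plan->planUserId = GetUserId();
            plan->row_security_env = row_security_now;
            plan->is_valid = true;
        }
    }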
Peter Eisentraut [Mon, 28 Mar 2016 06:44:53 +0000 (08:44 +0200)]
Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0ffb9ae13cb7e2a9480ed8ee34071074bd80a7aa
Tom Lane [Sun, 27 Mar 2016 23:26:26 +0000 (19:26 -0400)]
Release notes for 9.5.2, 9.4.7, 9.3.12, 9.2.16, 9.1.21.
Andres Freund [Sun, 27 Mar 2016 21:46:25 +0000 (23:46 +0200)]
pg_rewind: fsync target data directory.
Previously pg_rewind did not fsync any files. That's problematic, given
that the target directory is modified. If the database was started
afterwards, 2ce439f33 luckily already caused the data directory to be
synced to disk at postmaster startup, reducing the scope of the problem.
To fix, use initdb -S at the end of the pg_rewind run. It doesn't seem
worthwhile to duplicate the code into pg_rewind, and initdb -S is
already used that way by pg_upgrade.
Reported-By: Andres Freund
Author: Michael Paquier, somewhat edited by me
Discussion: 20160310034352.iuqgvpmg5qmnxtkz@alap3.anarazel.de
CAB7nPqSytVG1o4S3S2pA1O=692ekurJ+fckW2PywEG3sNw54Ow@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
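Conceptually the fix amounts to invoking initdb's sync-only mode on the target
directory once the rewind is done; a rough standalone sketch (the real code
resolves the initdb binary relative to pg_rewind rather than relying on PATH):

    #include <stdio.h>
    #include <stdlib.h>

    static void
    demo_sync_target_dir(const char *datadir)
    {
        char    cmd[4096];

        /* "initdb -S" (--sync-only) fsyncs everything under the data directory. */
        snprintf(cmd, sizeof(cmd), "initdb -D \"%s\" --sync-only", datadir);
        if (system(cmd) != 0)
            fprintf(stderr, "could not fsync target data directory \"%s\"\n", datadir);
    }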
Andres Freund [Sun, 27 Mar 2016 20:48:31 +0000 (22:48 +0200)]
pg_rewind: Close backup_label file descriptor.
This was a relatively harmless leak, as createBackupLabel() is only
called once per pg_rewind invocation.
Author: Michael Paquier
Reported-By: Michael Paquier
Discussion: CAB7nPqRnOw30gOXe2_SPLjh37bgm4V+txbYAPwoXb97nGQ297w@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
Andres Freund [Sun, 27 Mar 2016 15:47:55 +0000 (17:47 +0200)]
Change various Gin*Is* macros to return 0/1.
Returning the direct result of bit arithmetic, in a macro intended to be
used in a boolean manner, can be problematic if the return value is
stored in a variable of type 'bool'. If bool is implemented using C99's
_Bool, that can lead to comparison failures if the variable is then
compared again with the expression (see ginStepRight() for an example
that fails), as _Bool forces the result to be 0/1. That happens in some
configurations of newer MSVC compilers. It's also problematic when
storing the result of such an expression in a narrower type.
Several gin macros have been declared in that style since gin's initial
commit in 8a3631f8d86.
There are a lot more macros like this, but this is the only one causing
regression test failures; and I don't want to commit and backpatch a
larger patch with lots of conflicts just before the next set of minor
releases.
Discussion: 20150811154237.GD17575@awork2.anarazel.de
Backpatch: All supported branches
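The shape of the fix, sketched with made-up names (the real macros live in
GIN's private headers):

    /* Made-up stand-ins so the example is self-contained. */
    typedef struct DemoOpaque { unsigned int flags; } DemoOpaque;
    #define DEMO_LEAF                 0x0004
    #define DemoPageGetOpaque(page)   ((DemoOpaque *) (page))

    /* Problematic style: evaluates to 0 or 4.  Stored into a C99 _Bool it
     * collapses to 0/1, so a later comparison against the raw expression
     * can fail. */
    #define DemoPageIsLeaf_old(page)  (DemoPageGetOpaque(page)->flags & DEMO_LEAF)

    /* Fixed style: normalize to 0/1 in the macro itself. */
    #define DemoPageIsLeaf(page)      ((DemoPageGetOpaque(page)->flags & DEMO_LEAF) != 0)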