Michael Paquier [Thu, 3 Jun 2021 02:51:47 +0000 (11:51 +0900)]
Ignore more environment variables in TAP tests
Various environment variables were not getting reset in the TAP tests,
which would cause failures depending on the tests or the environment
variables involved. For example, PGSSL{MAX,MIN}PROTOCOLVERSION could
cause failures in the SSL tests. Even worse, a junk value of
PGCLIENTENCODING makes a server startup fail. The list of variables
reset is adjusted in each stable branch depending on what is supported.
While at it, simplify the code a bit, per a suggestion from Andrew
Dunstan, by using a list of variables instead of doing single deletions.
Reviewed-by: Andrew Dunstan, Daniel Gustafsson
Discussion: https://postgr.es/m/YLbjjRpucIeZ78VQ@paquier.xyz
Backpatch-through: 9.6
Tom Lane [Wed, 2 Jun 2021 18:38:14 +0000 (14:38 -0400)]
Fix planner's row-mark code for inheritance from a foreign table.
Commit 428b260f8 broke planning of cases where row marks are needed
(SELECT FOR UPDATE, etc) and one of the query's tables is a foreign
table that has regular table(s) as inheritance children. We got the
reverse case right, but apparently were thinking that foreign tables
couldn't be inheritance parents. Not so; so we need to be able to
add a CTID junk column while adding a new child, not only a wholerow
junk column.
Back-patch to v12 where the faulty code came in.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com
Tom Lane [Tue, 1 Jun 2021 15:12:56 +0000 (11:12 -0400)]
Reject SELECT ... GROUP BY GROUPING SETS (()) FOR UPDATE.
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets. That resulted in a bad plan and
a null-pointer dereference in the executor.
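For example, a query of roughly this shape (hypothetical table) is now
rejected up front instead of producing a bad plan:
    CREATE TABLE t (a int, b int);
    SELECT 1 FROM t GROUP BY GROUPING SETS (()) FOR UPDATE;  -- now an error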
Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable(). That'd just lead to
silly estimates, but it should be fixed too.
Per private report from Yaoguang Chen.
Back-patch to all supported branches.
Michael Paquier [Tue, 1 Jun 2021 00:27:31 +0000 (09:27 +0900)]
Add fallback implementation for setenv()
This fixes the code compilation on Windows with MSVC and Kerberos, as
a missing implementation of setenv() causes a compilation failure of the
GSSAPI code. This was only reproducible when building the code with
Kerberos, something that buildfarm animal hamerkop has fixed recently.
This issue only happens on 12 and 13, as this code has been introduced
in b0b39f7. HEAD is already able to compile properly thanks to
7ca37fb0, and this commit is a minimal cherry-pick of it.
Thanks to Tom Lane for the discussion.
Discussion: https://postgr.es/m/YLDtm5WGjPxm6ua4@paquier.xyz
Backpatch-through: 12
Tom Lane [Mon, 31 May 2021 16:03:00 +0000 (12:03 -0400)]
Fix mis-planning of repeated application of a projection.
create_projection_plan contains a hidden assumption (here made
explicit by an Assert) that a projection-capable Path will yield a
projection-capable Plan. Unfortunately, that assumption is violated
only a few lines away, by create_projection_plan itself. This means
that two stacked ProjectionPaths can yield an outcome where we try to
jam the upper path's tlist into a non-projection-capable child node,
resulting in an invalid plan.
There isn't any good reason to have stacked ProjectionPaths; indeed the
whole concept is faulty, since the set of Vars/Aggs/etc needed by the
upper one wouldn't necessarily be available in the output of the lower
one, nor could the lower one create such values if they weren't
available from its input. Hence, we can fix this by adjusting
create_projection_path to strip any top-level ProjectionPath from the
subpath it's given. (This amounts to saying "oh, we changed our
minds about what we need to project here".)
The test case added here only fails in v13 and HEAD; before that, we
don't attempt to shove the Sort into the parallel part of the plan,
for reasons that aren't entirely clear to me. However, all the
directly-related code looks generally the same as far back as v11,
where the hazard was introduced (by d7c19e62a). So I've got no faith
that the same type of bug doesn't exist in v11 and v12, given the
right test case. Hence, back-patch the code changes, but not the
irrelevant test case, into those branches.
Per report from Bas Poot.
Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
Noah Misch [Mon, 31 May 2021 07:29:58 +0000 (00:29 -0700)]
Raise a timeout to 180s, in test 010_logical_decoding_timelines.pl.
Per buildfarm member hornet. Also, update Pod documentation showing the
lower value. Back-patch to v10, where the test first appeared.
Thomas Munro [Sat, 29 May 2021 02:48:15 +0000 (14:48 +1200)]
Fix race condition when sharing tuple descriptors.
Parallel query processes that called BlessTupleDesc() for identical
tuple descriptors at the same moment could crash. There was code to
handle that rare case, but it dereferenced a bogus DSA pointer. Repair.
Back-patch to 11, where commit cc5f8136 added support for sharing tuple
descriptors in parallel queries.
Reported-by: Eric Thinnes <e.thinnes@gmx.de>
Discussion: https://postgr.es/m/99aaa2eb-e194-bf07-c29a-1a76b4f2bcf9%40gmx.de
Andrew Dunstan [Fri, 28 May 2021 13:35:11 +0000 (09:35 -0400)]
fix syntax error
Andrew Dunstan [Fri, 28 May 2021 13:26:30 +0000 (09:26 -0400)]
Report configured port in MSVC built pg_config
This is a long-standing omission, discovered when trying to write code
that relied on it.
Backpatch to all live branches.
Michael Paquier [Thu, 27 May 2021 11:11:24 +0000 (20:11 +0900)]
Fix MSVC scripts when building with GSSAPI/Kerberos
The deliverables of upstream Kerberos on Windows are installed with
paths that do not match our MSVC scripts. First, the include folder was
named "inc/" in our scripts, but the upstream MSIs use "include/".
Second, the build would fail with 64-bit environments as the libraries
are named differently.
This commit adjusts the MSVC scripts to be compatible with the latest
installations of upstream, and I have checked that compilation works
with both the 32-bit and 64-bit installations.
Special thanks to Kondo Yuta for the help in investigating the situation
in hamerkop, which had an incorrect configuration for the GSS
compilation.
Reported-by: Brian Ye
Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
Backpatch-through: 9.6
Michael Paquier [Thu, 27 May 2021 05:58:13 +0000 (14:58 +0900)]
doc: Fix description of some GUCs in docs and postgresql.conf.sample
The descriptions of the following parameters were imprecise or
incorrect about their context (PGC_POSTMASTER or PGC_SIGHUP):
- autovacuum_work_mem (docs, as of 9.6~)
- huge_page_size (docs, as of 14~)
- max_logical_replication_workers (docs, as of 10~)
- max_sync_workers_per_subscription (docs, as of 10~)
- min_dynamic_shared_memory (docs, as of 14~)
- recovery_init_sync_method (postgresql.conf.sample, as of 14~)
- remove_temp_files_after_crash (docs, as of 14~)
- restart_after_crash (docs, as of 9.6~)
- ssl_min_protocol_version (docs, as of 12~)
- ssl_max_protocol_version (docs, as of 12~)
This commit adjusts the description of all these parameters to be more
consistent with the practice used for the others.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YK2ltuLpe+FbRXzA@paquier.xyz
Backpatch-through: 9.6
Michael Paquier [Tue, 25 May 2021 01:11:17 +0000 (10:11 +0900)]
Disallow SSL renegotiation
SSL renegotiation is already disabled as of 48d23c72; however, this does
not prevent the server from complying with a client willing to use
renegotiation. In the last couple of years, renegotiation has had its
share of security issues and flaws (like the recent CVE-2021-3449), and
it could be possible to crash the backend with a client attempting
renegotiation.
This commit takes one extra step by disabling renegotiation in the
backend in the same way as SSL compression (f9264d15) or tickets
(97d3a0b0). OpenSSL 1.1.0h added an option named
SSL_OP_NO_RENEGOTIATION able to achieve that. Older versions have an
undocumented option called SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS that can
be set within the SSL object created when the TLS connection opens, but
I have decided not to use it, as it feels trickier to rely on and it is
not official. Note that this option is not usable in OpenSSL < 1.1.0h,
as the internal contents of the *SSL object are hidden from applications.
SSL renegotiation concerns protocols up to TLSv1.2.
Per original report from Robert Haas, with a patch based on a suggestion
by Andres Freund.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/YKZBXx7RhU74FlTE@paquier.xyz
Backpatch-through: 9.6
Tom Lane [Fri, 21 May 2021 19:12:08 +0000 (15:12 -0400)]
Disallow whole-row variables in GENERATED expressions.
This was previously allowed, but I think that was just an oversight.
It's a clear violation of the rule that a generated column cannot
depend on itself or other generated columns. Moreover, because the
code was relying on the assumption that no such cross-references
exist, it was pretty easy to crash ALTER TABLE and perhaps other
places. Even if you managed not to crash, you got quite unstable,
implementation-dependent results.
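For instance, a whole-row reference of this shape (hypothetical table
name) is now rejected:
    CREATE TABLE gtest (
        a int,
        b text GENERATED ALWAYS AS (gtest::text) STORED  -- now an error
    );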
Per report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.
Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
Tom Lane [Fri, 21 May 2021 19:02:07 +0000 (15:02 -0400)]
Fix usage of "tableoid" in GENERATED expressions.
We consider this supported (though I've got my doubts that it's a
good idea, because tableoid is not immutable). However, several
code paths failed to fill the field in soon enough, causing such
a GENERATED expression to see zero or the wrong value. This
occurred when ALTER TABLE adds a new GENERATED column to a table
with existing rows, and during regular INSERT or UPDATE on a
foreign table with GENERATED columns.
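A sketch of one affected case (hypothetical names); before this fix the
generated values for the existing rows could come out as 0 or wrong:
    CREATE TABLE gtab (a int);
    INSERT INTO gtab VALUES (1);
    ALTER TABLE gtab
        ADD COLUMN reloid oid GENERATED ALWAYS AS (tableoid) STORED;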
Noted during investigation of a report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.
Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
Tom Lane [Fri, 21 May 2021 18:03:53 +0000 (14:03 -0400)]
Restore the portal-level snapshot after procedure COMMIT/ROLLBACK.
COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
The original implementation of intra-procedure transactions just
cavalierly did that, ignoring the fact that this left us executing in
a rather different environment than normal. In particular, it turns
out that handling of toasted datums depends rather critically on there
being an outer ActiveSnapshot: otherwise, when SPI or the core
executor pop whatever snapshot they used and return, it's unsafe to
dereference any toasted datums that may appear in the query result.
It's possible to demonstrate "no known snapshots" and "missing chunk
number N for toast value" errors as a result of this oversight.
Historically this outer snapshot has been held by the Portal code,
and that seems like a good plan to preserve. So add infrastructure
to pquery.c to allow re-establishing the Portal-owned snapshot if it's
not there anymore, and add enough bookkeeping support that we can tell
whether it is or not.
We can't, however, just re-establish the Portal snapshot as part of
COMMIT/ROLLBACK. As in normal transaction start, acquiring the first
snapshot should wait until after SET and LOCK commands. Hence, teach
spi.c about doing this at the right time. (Note that this patch
doesn't fix the problem for any PLs that try to run intra-procedure
transactions without using SPI to execute SQL commands.)
This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
rename that to allow_nonatomic.
replication/logical/worker.c also needs some fixes, because it wasn't
careful to hold a snapshot open around AFTER trigger execution.
That code doesn't use a Portal, which I suspect someday we're gonna
have to fix. But for now, just rearrange the order of operations.
This includes back-patching the recent addition of finish_estate()
to centralize the cleanup logic there.
This also back-patches commit 2ecfeda3e into v13, to improve the
test coverage for worker.c (it was that test that exposed that
worker.c's snapshot management is wrong).
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
Amit Kapila [Fri, 21 May 2021 02:47:25 +0000 (08:17 +0530)]
Fix deadlock for multiple replicating truncates of the same table.
While applying the truncate change, the logical apply worker acquires
RowExclusiveLock on the relation being truncated. This allowed two apply
workers to truncate the same relation at the same time, which led to a
deadlock: one of the workers, after updating the pg_class tuple, tried
to acquire SHARE lock on the relation and started to wait for the second
worker, which had acquired RowExclusiveLock on the relation. When the
second worker then tried to update the pg_class tuple, it started to
wait for the first worker, resulting in a deadlock. Fix it by acquiring
AccessExclusiveLock on the relation before applying the truncate change,
as we do for a normal truncate operation.
Author: Peter Smith, test case by Haiying Tang
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/CAHut+PsNm43p0jM+idTvWwiGZPcP0hGrHMPK9TOAkc+a4UpUqw@mail.gmail.com
Tom Lane [Thu, 20 May 2021 22:32:37 +0000 (18:32 -0400)]
Avoid detoasting failure after COMMIT inside a plpgsql FOR loop.
exec_for_query() normally tries to prefetch a few rows at a time
from the query being iterated over, so as to reduce executor
entry/exit overhead. Unfortunately this is unsafe if we have
COMMIT or ROLLBACK within the loop, because there might be
TOAST references in the data that we prefetched but haven't
yet examined. Immediately after the COMMIT/ROLLBACK, we have
no snapshots in the session, meaning that VACUUM is at liberty
to remove recently-deleted TOAST rows.
This was originally reported as a case triggering the "no known
snapshots" error in init_toast_snapshot(), but even if you miss
hitting that, you can get "missing toast chunk", as illustrated
by the added isolation test case.
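A sketch of the hazard (hypothetical table holding TOASTed values):
    DO $$
    DECLARE r record;
    BEGIN
        FOR r IN SELECT payload FROM toasty LOOP
            COMMIT;                      -- drops all snapshots
            PERFORM length(r.payload);   -- prefetched TOAST pointer may dangle
        END LOOP;
    END $$;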
To fix, just disable prefetching in non-atomic contexts. Maybe
there will be performance complaints prompting us to work harder
later, but it's not clear at the moment that this really costs
much, and I doubt we'd want to back-patch any complicated fix.
In passing, adjust that error message in init_toast_snapshot()
to be a little clearer about the likely cause of the problem.
Patch by me, based on earlier investigation by Konstantin Knizhnik.
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
Tom Lane [Thu, 20 May 2021 17:03:09 +0000 (13:03 -0400)]
Clean up cpluspluscheck violation.
"typename" is a C++ keyword, so pg_upgrade.h fails to compile in C++.
Fortunately, there seems no likely reason for somebody to need to
do that. Nonetheless, it's project policy that all .h files should
pass cpluspluscheck, so rename the argument to fix that.
Oversight in 57c081de0; back-patch as that was. (The policy requiring
pg_upgrade.h to pass cpluspluscheck only goes back to v12, but it
seems best to keep this code looking the same in all branches.)
David Rowley [Mon, 17 May 2021 21:56:31 +0000 (09:56 +1200)]
Fix typo and outdated information in README.barrier
README.barrier didn't seem to get the memo when atomics were added. Fix
that.
Author: Tatsuo Ishii, David Rowley
Discussion: https://postgr.es/m/20210516.211133.2159010194908437625.t-ishii%40sraoss.co.jp
Backpatch-through: 9.6, oldest supported release
Tom Lane [Sat, 15 May 2021 16:21:06 +0000 (12:21 -0400)]
Be more careful about barriers when releasing BackgroundWorkerSlots.
ForgetBackgroundWorker lacked any memory barrier at all, while
BackgroundWorkerStateChange had one but unaccountably did
additional manipulation of the slot after the barrier. AFAICS,
the rule must be that the barrier is immediately before setting
or clearing slot->in_use.
It looks like back in 9.6 when ForgetBackgroundWorker was first
written, there might have been some case for not needing a
barrier there, but I'm not very convinced of that --- the fact
that the load of bgw_notify_pid is in the caller doesn't seem
to guarantee no memory ordering problem. So patch 9.6 too.
It's likely that this doesn't fix any observable bug on Intel
hardware, but machines with weaker memory ordering rules could
have problems here.
Discussion: https://postgr.es/m/4046084.1620244003@sss.pgh.pa.us
Tom Lane [Fri, 14 May 2021 21:36:20 +0000 (17:36 -0400)]
Doc: correct erroneous entry in this week's minor release notes.
The patch to disallow a NULL specification in combination with
GENERATED ... AS IDENTITY applied to both ALWAYS and BY DEFAULT
variants of that clause, not only the former.
Noted by Shay Rojansky.
Discussion: https://postgr.es/m/CADT4RqAwD3A=RvGiQU9AiTK-6VeuXcycwPHmJPv_OBCJFYOEww@mail.gmail.com
Tom Lane [Fri, 14 May 2021 19:07:34 +0000 (15:07 -0400)]
Prevent infinite insertion loops in spgdoinsert().
Formerly we just relied on operator classes that assert longValuesOK
to eventually shorten the leaf value enough to fit on an index page.
That fails since the introduction of INCLUDE-column support (commit
09c1c6ab4), because the INCLUDE columns might alone take up more
than a page, meaning no amount of leaf-datum compaction will get
the job done. At least with spgtextproc.c, that leads to an infinite
loop, since spgtextproc.c won't throw an error for not being able
to shorten the leaf datum anymore.
To fix without breaking cases that would otherwise work, add logic
to spgdoinsert() to verify that the leaf tuple size is decreasing
after each "choose" step. Some opclasses might not decrease the
size on every single cycle, and in any case, alignment roundoff
of the tuple size could obscure small gains. Therefore, allow
up to 10 cycles without additional savings before throwing an
error. (Perhaps this number will need adjustment, but it seems
quite generous right now.)
As long as we've developed this logic, let's back-patch it.
The back branches don't have INCLUDE columns to worry about, but
this seems like a good defense against possible bugs in operator
classes. We already know that an infinite loop here is pretty
unpleasant, so having a defense seems to outweigh the risk of
breaking things. (Note that spgtextproc.c is actually the only
known opclass with longValuesOK support, so that this is all moot
for known non-core opclasses anyway.)
Per report from Dilip Kumar.
Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
Tom Lane [Fri, 14 May 2021 17:26:55 +0000 (13:26 -0400)]
Fix query-cancel handling in spgdoinsert().
Knowing that a buggy opclass could cause an infinite insertion loop,
spgdoinsert() intended to allow its loop to be interrupted by query
cancel. However, that never actually worked, because in iterations
after the first, we'd be holding buffer lock(s) which would cause
InterruptHoldoffCount to be positive, preventing servicing of the
interrupt.
To fix, check if an interrupt is pending, and if so fall out of
the insertion loop and service the interrupt after we've released
the buffers. If it was indeed a query cancel, that's the end of
the matter. If it was a non-canceling interrupt reason, make use
of the existing provision to retry the whole insertion. (This isn't
as wasteful as it might seem, since any upper-level index tuples we
already created should be usable in the next attempt.)
While there's no known instance of such a bug in existing release
branches, it still seems like a good idea to back-patch this to
all supported branches, since the behavior is fairly nasty if a
loop does happen --- not only is it uncancelable, but it will
quickly consume memory to the point of an OOM failure. In any
case, this code is certainly not working as intended.
Per report from Dilip Kumar.
Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
Tom Lane [Fri, 14 May 2021 16:54:26 +0000 (12:54 -0400)]
Refactor CHECK_FOR_INTERRUPTS() to add flexibility.
Split up CHECK_FOR_INTERRUPTS() to provide an additional macro
INTERRUPTS_PENDING_CONDITION(), which just tests whether an
interrupt is pending without attempting to service it. This is
useful in situations where the caller knows that interrupts are
blocked, and would like to find out if it's worth the trouble
to unblock them.
Also add INTERRUPTS_CAN_BE_PROCESSED(), which indicates whether
CHECK_FOR_INTERRUPTS() can be relied on to clear the pending interrupt.
This commit doesn't actually add any uses of the new macros,
but a follow-on bug fix will do so. Back-patch to all supported
branches to provide infrastructure for that fix.
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/20210513155351.GA7848@alvherre.pgsql
Alexander Korotkov [Thu, 13 May 2021 13:10:21 +0000 (16:10 +0300)]
Improve documentation example for jsonpath like_regex operator
Make the sample like_regex expression match string values of the root object instead of the
whole document. The corrected example seems to represent a more relevant
use case.
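A filter of this general shape (not the exact documentation sample)
matches string members of the root object rather than the whole document:
    SELECT jsonb_path_query(
        '{"a": "hello", "b": "world", "c": 1}',
        '$.* ? (@ like_regex "^hel")');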
Backpatch to 12, when jsonpath was introduced.
Discussion: https://postgr.es/m/13440f8b-4c1f-5875-c8e3-f3f65606af2f%40xs4all.nl
Author: Erik Rijkers
Reviewed-by: Michael Paquier, Alexander Korotkov
Backpatch-through: 12
Alvaro Herrera [Wed, 12 May 2021 23:13:54 +0000 (19:13 -0400)]
Rename the logical replication global "wrconn"
The worker.c global wrconn is only meant to be used by logical apply/
tablesync workers, but there are other variables with the same name. To
reduce future confusion rename the global from "wrconn" to
"LogRepWorkerWalRcvConn".
While this is just cosmetic, it seems better to backpatch it all the way
back to 10 where this code appeared, to avoid future backpatching
issues.
Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
Tom Lane [Mon, 10 May 2021 20:43:52 +0000 (16:43 -0400)]
Stamp 12.7.
Tom Lane [Mon, 10 May 2021 17:10:29 +0000 (13:10 -0400)]
Last-minute updates for release notes.
Security: CVE-2021-32027, CVE-2021-32028, CVE-2021-32029
Tom Lane [Mon, 10 May 2021 15:02:29 +0000 (11:02 -0400)]
Fix mishandling of resjunk columns in ON CONFLICT ... UPDATE tlists.
It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE
list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present.
If it happens, the ON CONFLICT UPDATE code path would end up storing
tuples that include the values of the extra resjunk columns. That's
fairly harmless in the short run, but if new columns are added to
the table then the values would become accessible, possibly leading
to malfunctions if they don't match the datatypes of the new columns.
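The problematic shape, sketched with hypothetical tables, is a
multi-column SET with a sub-SELECT (which produces a MULTIEXPR_SUBLINK
and resjunk tlist entries):
    CREATE TABLE tgt (k int PRIMARY KEY, a int, b int);
    CREATE TABLE src (k int, a int, b int);
    INSERT INTO tgt VALUES (1, 1, 1)
    ON CONFLICT (k) DO UPDATE
        SET (a, b) = (SELECT s.a, s.b FROM src s WHERE s.k = excluded.k);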
This had escaped notice through a confluence of missing sanity checks,
including
* There's no cross-check that a tuple presented to heap_insert or
heap_update matches the table rowtype. While it's difficult to
check that fully at reasonable cost, we can easily add assertions
that there aren't too many columns.
* The output-column-assignment cases in execExprInterp.c lacked
any sanity checks on the output column numbers, which seems like
an oversight considering there are plenty of assertion checks on
input column numbers. Add assertions there too.
* We failed to apply nodeModifyTable's ExecCheckPlanOutput() to
the ON CONFLICT UPDATE tlist. That wouldn't have caught this
specific error, since that function is chartered to ignore resjunk
columns; but it sure seems like a bad omission now that we've seen
this bug.
In HEAD, the right way to fix this is to make the processing of
ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists
now do, that is don't add "SET x = x" entries, and use
ExecBuildUpdateProjection to evaluate the tlist and combine it with
old values of the not-set columns. This adds a little complication
to ExecBuildUpdateProjection, but allows removal of a comparable
amount of now-dead code from the planner.
In the back branches, the most expedient solution seems to be to
(a) use an output slot for the ON CONFLICT UPDATE projection that
actually matches the target table, and then (b) invent a variant of
ExecBuildProjectionInfo that can be told to not store values resulting
from resjunk columns, so it doesn't try to store into nonexistent
columns of the output slot. (We can't simply ignore the resjunk columns
altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.)
This works back to v10. In 9.6, projections work much differently and
we can't cheaply give them such an option. The 9.6 version of this
patch works by inserting a JunkFilter when it's necessary to get rid
of resjunk columns.
In addition, v11 and up have the reverse problem when trying to
perform ON CONFLICT UPDATE on a partitioned table. Through a
further oversight, adjust_partition_tlist() discarded resjunk columns
when re-ordering the ON CONFLICT UPDATE tlist to match a partition.
This accidentally prevented the storing-bogus-tuples problem, but
at the cost that MULTIEXPR_SUBLINK cases didn't work, typically
crashing if more than one row has to be updated. Fix by preserving
resjunk columns in that routine. (I failed to resist the temptation
to add more assertions there too, and to do some minor code
beautification.)
Per report from Andres Freund. Back-patch to all supported branches.
Security: CVE-2021-32028
Tom Lane [Mon, 10 May 2021 14:44:38 +0000 (10:44 -0400)]
Prevent integer overflows in array subscripting calculations.
While we were (mostly) careful about ensuring that the dimensions of
arrays aren't large enough to cause integer overflow, the lower bound
values were generally not checked. This allows situations where
lower_bound + dimension overflows an integer. It seems that that's
harmless so far as array reading is concerned, except that array
elements with subscripts notionally exceeding INT_MAX are inaccessible.
However, it confuses various array-assignment logic, resulting in a
potential for memory stomps.
Fix by adding checks that array lower bounds aren't large enough to
cause lower_bound + dimension to overflow. (Note: this results in
disallowing cases where the last subscript position would be exactly
INT_MAX. In principle we could probably allow that, but there's a lot
of code that computes lower_bound + dimension and would need adjustment.
It seems doubtful that it's worth the trouble/risk to allow it.)
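For example, an array value whose last subscript would land on INT_MAX
should now be rejected up front (exact error wording aside):
    SELECT '[2147483646:2147483647]={1,2}'::int[];  -- now an error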
Somewhat independently of that, array_set_element() was careless
about possible overflow when checking the subscript of a fixed-length
array, creating a different route to memory stomps. Fix that too.
Security: CVE-2021-32027
Peter Eisentraut [Mon, 10 May 2021 12:30:04 +0000 (14:30 +0200)]
Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 7221ef1e0bfee1318f195b8faca683c0ffbee895
Tom Lane [Sun, 9 May 2021 17:31:40 +0000 (13:31 -0400)]
Release notes for 13.3, 12.7, 11.12, 10.17, 9.6.22.
Alvaro Herrera [Fri, 7 May 2021 15:46:37 +0000 (11:46 -0400)]
AlterSubscription_refresh: avoid stomping on global variable
This patch replaces use of the global "wrconn" variable in
AlterSubscription_refresh with a local variable of the same name, making
it consistent with other functions in subscriptioncmds.c (e.g.
DropSubscription).
The global wrconn is only meant to be used by logical apply/tablesync workers.
Abusing it this way is known to cause trouble if an apply worker
manages to do a subscription refresh, such as reported by Jeremy Finzel
and diagnosed by Andres Freund back in November 2020, at
https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi@alap3.anarazel.de
Backpatch to 10. In branch master, also move the connection establishment
to occur outside the PG_TRY block; this way we can remove a test for NULL in
PG_FINALLY, and it also makes the code more consistent with similar code in
the same file.
Author: Peter Smith <peter.b.smith@fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
Alvaro Herrera [Thu, 6 May 2021 21:17:56 +0000 (17:17 -0400)]
Document lock level used by ALTER TABLE VALIDATE CONSTRAINT
Backpatch all the way back to 9.6.
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Discussion: https://postgr.es/m/CANbhV-EwxvdhHuOLdfG2ciYrHOHXV=mm6=fD5aMhqcH09Li3Tg@mail.gmail.com
Alvaro Herrera [Wed, 5 May 2021 16:14:21 +0000 (12:14 -0400)]
Have ALTER CONSTRAINT recurse on partitioned tables
When ALTER TABLE .. ALTER CONSTRAINT changed deferrability properties
in a partitioned table, we failed to propagate those changes correctly
to partitions and to triggers. Repair by adding a recursion
mechanism to affect all derived constraints and all derived triggers.
(In particular, recurse to partitions even if their respective parents
are already in the desired state: it is possible for the partitions to
have been altered individually.) Because foreign keys involve tables on
two sides, we cannot use the standard ALTER TABLE recursion mechanism,
so we invent our own by following pg_constraint.conparentid down.
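A sketch of the now-handled case, with hypothetical names and the
default constraint naming:
    CREATE TABLE ref (id int PRIMARY KEY);
    CREATE TABLE part (id int, r int REFERENCES ref (id))
        PARTITION BY RANGE (id);
    CREATE TABLE part1 PARTITION OF part FOR VALUES FROM (0) TO (100);
    -- now recurses to part1's derived constraint and triggers:
    ALTER TABLE part ALTER CONSTRAINT part_r_fkey
        DEFERRABLE INITIALLY DEFERRED;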
When ALTER TABLE .. ALTER CONSTRAINT is invoked on the derived
pg_constraint object that's automatically created in a partition as a
result of a constraint added to its parent, raise an error instead of
pretending to work and then failing to modify all the affected triggers.
Before this commit such a command would be allowed but failed to affect
all triggers, so it would silently misbehave. (Restoring dumps of
existing databases is not affected, because pg_dump does not produce
anything for such a derived constraint anyway.)
Add some tests for the case.
Backpatch to 11, where foreign key support was added to partitioned
tables by commit 3de241dba86f. (A related change is commit f56f8f8da6af
in pg12 which added support for FKs *referencing* partitioned tables;
this is what forces us to use an ad-hoc recursion mechanism for this.)
Diagnosed by Tom Lane from bug report from Ron L Johnson. As of this
writing, no reviews were offered.
Discussion: https://postgr.es/m/75fe0761-a291-86a9-c8d8-4906da077469@gmail.com
Discussion: https://postgr.es/m/3144850.1607369633@sss.pgh.pa.us
Alvaro Herrera [Tue, 4 May 2021 14:09:11 +0000 (10:09 -0400)]
Fix OID passed to object-alter hook during ALTER CONSTRAINT
The OID of the constraint is used instead of the OID of the trigger --
an easy mistake to make. Apparently the object-alter hooks are not very
well tested :-(
Backpatch to 12, where this typo was introduced by 578b229718e8
Discussion: https://postgr.es/m/20210503231633.GA6994@alvherre.pgsql
Peter Eisentraut [Tue, 4 May 2021 12:03:54 +0000 (14:03 +0200)]
pg_dump: Fix dump of generated columns in partitions
The previous fix for dumping of inherited generated columns
(0bf83648a52df96f7c8677edbbdf141bfa0cf32b) must not be applied to
partitions, since, unlike normal inherited tables, they are always
dumped separately and reattached.
Reported-by: Santosh Udupi <email@hitha.net>
Discussion: https://www.postgresql.org/message-id/flat/CACLRvHZ4a-%2BSM_159%2BtcrHdEqxFrG%3DW4gwTRnwf7Oj0UNj5R2A%40mail.gmail.com
Peter Eisentraut [Tue, 4 May 2021 09:45:37 +0000 (11:45 +0200)]
Fix ALTER TABLE / INHERIT with generated columns
When running ALTER TABLE t2 INHERIT t1, we must check that columns in
t2 that correspond to a generated column in t1 are also generated and
have the same generation expression. Otherwise, this would allow
creating setups that a normal CREATE TABLE sequence would not allow.
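For example, with hypothetical tables, this is now rejected because the
child's column is not generated:
    CREATE TABLE parent (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    CREATE TABLE child  (a int, b int);
    ALTER TABLE child INHERIT parent;  -- now an error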
Discussion: https://www.postgresql.org/message-id/22de27f6-7096-8d96-4619-7b882932ca25@2ndquadrant.com
Tom Lane [Fri, 30 Apr 2021 19:37:56 +0000 (15:37 -0400)]
Doc: add an example of a self-referential foreign key to ddl.sgml.
While we've always allowed such cases, the documentation didn't
say you could do it.
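The added example is along these lines (hypothetical names; the exact
SGML sample may differ):
    CREATE TABLE tree (
        node_id   integer PRIMARY KEY,
        parent_id integer REFERENCES tree (node_id)
    );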
Discussion: https://postgr.es/m/161969805833.690.13680986983883602407@wrigleys.postgresql.org
Tom Lane [Fri, 30 Apr 2021 19:10:06 +0000 (15:10 -0400)]
Doc: update libpq's documentation for PQfn().
Mention specifically that you can't call aggregates, window functions,
or procedures this way (the inability to call SRFs was already
mentioned).
Also, the claim that PQfn doesn't support NULL arguments or results
has been a lie since we invented protocol 3.0. Not sure why this
text was never updated for that, but do it now.
Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us
Tom Lane [Fri, 30 Apr 2021 18:10:26 +0000 (14:10 -0400)]
Disallow calling anything but plain functions via the fastpath API.
Reject aggregates, window functions, and procedures. Aggregates
failed anyway, though with a somewhat obscure error message.
Window functions would hit an Assert or null-pointer dereference.
Procedures seemed to work as long as you didn't try to do
transaction control, but (a) transaction control is sort of the
point of a procedure, and (b) it's not entirely clear that no
bugs lurk in that path. Given the lack of testing of this area,
it seems safest to be conservative in what we support.
Also reject proretset functions, as the fastpath protocol can't
support returning a set.
Also remove an easily-triggered assertion that the given OID
isn't 0; the subsequent lookups can handle that case themselves.
Per report from Theodor-Arsenij Larionov-Trichkin.
Back-patch to all supported branches. (The procedure angle
only applies in v11+, of course.)
Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us
Tom Lane [Thu, 29 Apr 2021 19:24:37 +0000 (15:24 -0400)]
Fix some more omissions in pg_upgrade's tests for non-upgradable types.
Commits 29aeda6e4 et al closed up some oversights involving not checking
for non-upgradable types within container types, such as arrays and
ranges. However, I only looked at version.c, failing to notice that
there were substantially-equivalent tests in check.c. (The division
of responsibility between those files is less than clear...)
In addition, because genbki.pl does not guarantee that auto-generated
rowtype OIDs will hold still across versions, we need to consider that
the composite type associated with a system catalog or view is
non-upgradable. It seems unlikely that someone would have a user
column declared that way, but if they did, trying to read it in another
PG version would likely draw "no such pg_type OID" failures, thanks
to the type OID embedded in composite Datums.
To support the composite and reg*-type cases, extend the recursive
query that does the search to allow any base query that returns
a column of pg_type OIDs, rather than limiting it to exactly one
starting type.
As before, back-patch to all supported branches.
Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
Alvaro Herrera [Thu, 29 Apr 2021 15:31:24 +0000 (11:31 -0400)]
Improve documentation for default_tablespace on partitioned tables
Backpatch to 12, where 87259588d0ab introduced the current behavior.
Per note from Justin Pryzby.
Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210416143135.GI3315@telsasoft.com
Tom Lane [Wed, 28 Apr 2021 14:03:28 +0000 (10:03 -0400)]
Doc: fix discussion of how to get real Julian Dates.
Somehow I'd convinced myself that rotating to UTC-12 was the way
to do this, but upon further review, it's definitely UTC+12.
Discussion: https://postgr.es/m/1197050.1619123213@sss.pgh.pa.us
Michael Paquier [Wed, 28 Apr 2021 02:58:46 +0000 (11:58 +0900)]
Fix use-after-release issue with pg_identify_object_as_address()
Spotted by buildfarm member prion, with -DRELCACHE_FORCE_RELEASE.
Introduced in f7aab36.
Discussion: https://postgr.es/m/2759018.1619577848@sss.pgh.pa.us
Backpatch-through: 9.6
Michael Paquier [Wed, 28 Apr 2021 02:18:20 +0000 (11:18 +0900)]
Fix pg_identify_object_as_address() with event triggers
Attempting to use this function with event triggers failed, as, since
its introduction in a676201, this code has never associated an object
name with event triggers. This addresses the failure by adding the
event trigger name to the set defining its object address.
Note that regression tests are added within event_trigger and not
object_address to avoid issues with concurrent connections in parallel
schedules.
Author: Joel Jacobson
Discussion: https://postgr.es/m/3c905e77-a026-46ae-8835-c3f6cd1d24c8@www.fastmail.com
Backpatch-through: 9.6
Tom Lane [Mon, 26 Apr 2021 15:50:35 +0000 (11:50 -0400)]
Doc: document EXTRACT(JULIAN ...), improve Julian Date explanation.
For some reason, the "julian" option for extract()/date_part() has
never gotten listed in the manual. Also, while Appendix B mentioned
in passing that we don't conform to the usual astronomical definition
that a Julian date starts at noon UTC, it was kind of vague about what
we do instead. Clarify that, and add an example showing how to get
the astronomical definition if you want it.
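Roughly the kind of examples now shown (exact doc wording may differ):
    SELECT extract(julian from date '2021-06-23');
    -- astronomical (noon-UTC-based) Julian Date via rotation to UTC+12:
    SELECT extract(julian from
                   timestamptz '2021-06-23 07:00:00-04' at time zone 'UTC+12');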
It's been like this for ages, so back-patch to all supported branches.
Discussion: https://postgr.es/m/1197050.1619123213@sss.pgh.pa.us
Fujii Masao [Fri, 23 Apr 2021 06:45:46 +0000 (15:45 +0900)]
doc: Fix obsolete description about pg_basebackup.
Previously it was documented that if using "-X none" option there was
no guarantee that all required WAL files were archived at the end of
pg_basebackup when taking a backup from the standby. But this limitation
was removed by commit 52f8a59dd9. Now, even when taking a backup
from the standby, pg_basebackup can wait for all required WAL files
to be archived. Therefore this commit removes such obsolete
description from the docs.
Also, this commit adds to the docs a new description of the limitation
when taking a backup from the standby. The limitation is that
pg_basebackup cannot force the standby to switch to a new WAL file
at the end of backup, which may cause pg_basebackup to wait a long
time for the last required WAL file to be switched and archived,
especially when write activity on the primary is low.
Back-patch to v10 where the issue was introduced.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi, Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/20210420.133235.1342729068750553399.horikyota.ntt@gmail.com
Tom Lane [Thu, 22 Apr 2021 21:30:42 +0000 (17:30 -0400)]
Don't crash on reference to an un-available system column.
Adopt a more consistent policy about what slot-type-specific
getsysattr functions should do when system attributes are not
available. To wit, they should all throw the same user-oriented
error, rather than variously crashing or emitting developer-oriented
messages.
This closes an identifiable problem in commits a71cfc56b and
3fb93103a (in v13 and v12), so back-patch into those branches,
along with a test case to try to ensure we don't break it again.
It is not known that any of the former crash cases are reachable
in HEAD, but this seems like a good safety improvement in any case.
Discussion: https://postgr.es/m/141051591267657@mail.yandex.ru
Tom Lane [Thu, 22 Apr 2021 15:46:41 +0000 (11:46 -0400)]
Fix bugs in RETURNING in cross-partition UPDATE cases.
If the source and destination partitions don't have identical
rowtypes (for example, one has dropped columns the other lacks),
then the planSlot contents will be different because of that.
If the query has a RETURNING list that tries to return resjunk
columns out of the planSlot, that is columns from tables that
were joined to the target table, we'd get errors or wrong answers.
That's because we used the RETURNING list generated for the
destination partition, which expects a planSlot matching that
partition's subplan.
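A sketch of the affected shape (hypothetical tables; one partition
carries a dropped column, and RETURNING pulls a column from a joined
table during a cross-partition move):
    CREATE TABLE p (a int, b text) PARTITION BY RANGE (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (0) TO (10);
    CREATE TABLE p2 (dropme int, a int, b text);
    ALTER TABLE p2 DROP COLUMN dropme;
    ALTER TABLE p ATTACH PARTITION p2 FOR VALUES FROM (10) TO (20);
    CREATE TABLE j (a int, note text);
    INSERT INTO p VALUES (5, 'x');
    INSERT INTO j VALUES (5, 'hello');
    UPDATE p SET a = 15 FROM j WHERE p.a = j.a RETURNING p.*, j.note;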
The most practical fix seems to be to convert the updated destination
tuple back to the source partition's rowtype, and then apply the
RETURNING list generated for the source partition. This avoids making
fragile assumptions about whether the per-subpartition subplans
generated all the resjunk columns in the same order.
This has been broken since v11 introduced cross-partition UPDATE.
The lack of field complaints shows that non-identical partitions
aren't a common case; therefore, don't stress too hard about
making the conversion efficient.
There's no such bug in HEAD, because commit 86dc90056 got rid of
per-target-relation variance in the contents of the planSlot.
Hence, patch v11-v13 only.
Amit Langote and Etsuro Fujita, small changes by me
Discussion: https://postgr.es/m/CA+HiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA@mail.gmail.com
Andrew Dunstan [Wed, 21 Apr 2021 15:12:04 +0000 (11:12 -0400)]
fix silly perl error in commit d064afc720
Andrew Dunstan [Wed, 21 Apr 2021 14:21:22 +0000 (10:21 -0400)]
Only ever test for non-127.0.0.1 addresses on Windows in PostgresNode
This has been found to cause hangs where tcp usage is forced.
Alexey Kodratov
Discussion: https://postgr.es/m/82e271a9a11928337fcb5b5e57b423c0@postgrespro.ru
Backpatch to all live branches
Magnus Hagander [Tue, 20 Apr 2021 12:35:16 +0000 (14:35 +0200)]
Fix typo in comment
Author: Julien Rouhaud
Backpatch-through: 11
Discussion: https://postgr.es/m/20210420121659.odjueyd4rpilorn5@nol
Andrew Dunstan [Fri, 16 Apr 2021 20:54:04 +0000 (16:54 -0400)]
Allow TestLib::slurp_file to skip contents, and use as needed
In order to avoid getting old logfile contents certain functions in
PostgresNode were doing one of two things. On Windows it rotated the
logfile and restarted the server, while elsewhere it truncated the log
file. Both of these are unnecessary. We borrow from the buildfarm which
does this instead: note the size of the logfile before we start, and
then when fetching the logfile skip to that position before accumulating
contents. This is spelled differently on Windows but the effect is the
same. This is largely centralized in TestLib's slurp_file function,
which has a new optional parameter, the offset to skip to before
starting to read the file. Code in the client becomes much neater.
Backpatch to all live branches.
Michael Paquier, slightly modified by me.
Discussion: https://postgr.es/m/YHajnhcMAI3++pJL@paquier.xyz
Michael Paquier [Fri, 16 Apr 2021 07:56:29 +0000 (16:56 +0900)]
doc: Fix typo in example query of SQL/JSON
Author: Erik Rijkers
Discussion: https://postgr.es/m/1219476687.20432.1617452918468@webmailclassic.xs4all.nl
Backpatch-through: 12
Tom Lane [Tue, 13 Apr 2021 19:10:18 +0000 (15:10 -0400)]
Fix some inappropriately-disallowed uses of ALTER ROLE/DATABASE SET.
Most GUC check hooks that inspect database state have special checks
that prevent them from throwing hard errors for state-dependent issues
when source == PGC_S_TEST. This allows, for example,
"ALTER DATABASE d SET default_text_search_config = foo" when the "foo"
configuration hasn't been created yet. Without this, we have problems
during dump/reload or pg_upgrade, because pg_dump has no idea about
possible dependencies of GUC values and can't ensure a safe restore
ordering.
However, check_role() and check_session_authorization() hadn't gotten
the memo about that, and would throw hard errors anyway. It's not
entirely clear what the use-case is for "ALTER ROLE x SET role = y",
but we've now heard two independent complaints about that bollixing
an upgrade, so apparently some people are doing it.
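For instance (hypothetical role names), a setting like this could
previously make a dump/reload or pg_upgrade restore fail with a hard
error from check_role():
    ALTER ROLE alice SET role = bob;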
Hence, fix these two functions to act more like other check hooks
with similar needs. (But I did not change their insistence on
being inside a transaction, as it's still not apparent that setting
either GUC from the configuration file would be wise.)
Also fix check_temp_buffers, which had a different form of the disease
of making state-dependent checks without any exception for PGC_S_TEST.
A cursory survey of other GUC check hooks did not find any more issues
of this ilk. (There are a lot of interdependencies among
PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but
they're not relevant to the immediate concern because they can't be
set via ALTER ROLE/DATABASE.)
Per reports from Charlie Hornsby and Nathan Bossart. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org
Tom Lane [Tue, 13 Apr 2021 17:37:07 +0000 (13:37 -0400)]
Redesign the caching done by get_cached_rowtype().
Previously, get_cached_rowtype() cached a pointer to a reference-counted
tuple descriptor from the typcache, relying on the ExprContextCallback
mechanism to release the tupdesc refcount when the expression tree
using the tupdesc was destroyed. This worked fine when it was designed,
but the introduction of within-DO-block COMMITs broke it. The refcount
is logged in a transaction-lifespan resource owner, but plpgsql won't
destroy simple expressions made within the DO block (before its first
commit) until the DO block is exited. That results in a warning about
a leaked tupdesc refcount when the COMMIT destroys the original resource
owner, and then an error about the active resource owner not holding a
matching refcount when the expression is destroyed.
To fix, get rid of the need to have a shutdown callback at all, by
instead caching a pointer to the relevant typcache entry. Those
survive for the life of the backend, so we needn't worry about the
pointer becoming stale. (For registered RECORD types, we can still
cache a pointer to the tupdesc, knowing that it won't change for the
life of the backend.) This mechanism has been in use in plpgsql
and expandedrecord.c since commit 4b93f5799, and seems to work well.
This change requires modifying the ExprEvalStep structs used by the
relevant expression step types, which is slightly worrisome for
back-patching. However, there seems no good reason for extensions
to be familiar with the details of these particular sub-structs.
Per report from Rohit Bhogate. Back-patch to v11 where within-DO-block
COMMITs became a thing.
Discussion: https://postgr.es/m/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com
Tom Lane [Tue, 13 Apr 2021 16:17:24 +0000 (12:17 -0400)]
Avoid improbable PANIC during heap_update.
heap_update needs to clear any existing "all visible" flag on
the old tuple's page (and on the new page too, if different).
Per coding rules, to do this it must acquire pin on the appropriate
visibility-map page while not holding exclusive buffer lock;
which creates a race condition since someone else could set the
flag whenever we're not holding the buffer lock. The code is
supposed to handle that by re-checking the flag after acquiring
buffer lock and retrying if it became set. However, one code
path through heap_update itself, as well as one in its subroutine
RelationGetBufferForTuple, failed to do this. The end result,
in the unlikely event that a concurrent VACUUM did set the flag
while we're transiently not holding lock, is a non-recurring
"PANIC: wrong buffer passed to visibilitymap_clear" failure.
This has been seen a few times in the buildfarm since recent VACUUM
changes that added code paths that could set the all-visible flag
while holding only exclusive buffer lock. Previously, the flag
was (usually?) set only after doing LockBufferForCleanup, which
would insist on buffer pin count zero, thus preventing the flag
from becoming set partway through heap_update. However, it's
clear that it's heap_update not VACUUM that's at fault here.
What's less clear is whether there is any hazard from these bugs
in released branches. heap_update is certainly violating API
expectations, but if there is no code path that can set all-visible
without a cleanup lock then it's only a latent bug. That's not
100% certain though, besides which we should worry about extensions
or future back-patch fixes that could introduce such code paths.
I chose to back-patch to v12. Fixing RelationGetBufferForTuple
before that would require also back-patching portions of older
fixes (notably 0d1fe9f74), which is more code churn than seems
prudent to fix a hypothetical issue.
Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
Noah Misch [Tue, 13 Apr 2021 02:24:41 +0000 (19:24 -0700)]
Use "-I." in directories holding Bison parsers, for Oracle compilers.
With the Oracle Developer Studio 12.6 compiler, #line directives alter
the current source file location for purposes of #include "..."
directives. Hence, a VPATH build failed with 'cannot find include file:
"specscanner.c"'. With two exceptions, parser-containing directories
already add "-I. -I$(srcdir)"; eliminate the exceptions. Back-patch to
9.6 (all supported versions).
Noah Misch [Tue, 13 Apr 2021 02:24:21 +0000 (19:24 -0700)]
Port regress-python3-mangle.mk to Solaris "sed".
It doesn't support "\(foo\)*" like a POSIX "sed" implementation does;
see the Autoconf manual. Back-patch to 9.6 (all supported versions).
Tom Lane [Mon, 12 Apr 2021 18:37:22 +0000 (14:37 -0400)]
Fix old bug with coercing the result of a COLLATE expression.
There are hacks in parse_coerce.c to push down a requested coercion
to below any CollateExpr that may appear. However, we did that even
if the requested data type is non-collatable, leading to an invalid
expression tree in which CollateExpr is applied to a non-collatable
type. The fix is just to drop the CollateExpr altogether, reasoning
that it's useless.
This bug is ten years old, dating to the original addition of
COLLATE support. The lack of field complaints suggests that there
aren't a lot of user-visible consequences. We noticed the problem
because it would trigger an assertion in DefineVirtualRelation if
the invalid structure appears as an output column of a view; however,
in a non-assert build, you don't see a crash, just a (subtly incorrect)
complaint about applying collation to a non-collatable type. I found
that by putting the incorrect structure further down in a view, I could
make a view definition that would fail dump/reload, per the added
regression test case. But CollateExpr doesn't do anything at run-time,
so this likely doesn't lead to any really exciting consequences.
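A sketch of the affected shape (hypothetical view): the COLLATE is
useless once the value is coerced to a non-collatable type, and is now
simply dropped:
    CREATE VIEW v AS SELECT ('42' COLLATE "C")::int AS x;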
Per report from Yulin Pei. Back-patch to all supported branches.
Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
Michael Paquier [Mon, 12 Apr 2021 02:31:30 +0000 (11:31 +0900)]
Fix out-of-bound memory access for interval -> char conversion
Using Roman numerals (via "RM" or "rm") for a conversion to calculate a
number of months has never considered the case of negative numbers,
where a conversion could easily cause out-of-bound memory accesses. The
conversions in themselves were not completely consistent either, as
specifying 12 would result in NULL, but it should mean XII.
This commit reworks the conversion calculation to have a more
consistent behavior:
- If the number of months and years is 0, return NULL.
- If the number of months is positive, return the exact month number.
- If the number of months is negative, do a backward calculation, with
-1 meaning December, -2 November, etc.
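For example (exact padding of the Roman numeral aside):
    SELECT to_char(interval '3 months',  'RM');  -- III
    SELECT to_char(interval '-3 months', 'RM');  -- counted backward: X (October)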
Reported-by: Theodor Arsenij Larionov-Trichkin
Author: Julien Rouhaud
Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
Backpatch-through: 9.6
Magnus Hagander [Fri, 9 Apr 2021 10:40:14 +0000 (12:40 +0200)]
Fix typo
Author: Daniel Westermann
Backpatch-through: 9.6
Discussion: https://postgr.es/m/GV0P278MB0483A7AA85BAFCC06D90F453D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
Michael Paquier [Fri, 9 Apr 2021 04:53:22 +0000 (13:53 +0900)]
Fix typos and grammar in documentation and code comments
Comment fixes are applied on HEAD, and documentation improvements are
applied on back-branches where needed.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210408164008.GJ6592@telsasoft.com
Backpatch-through: 9.6
Tomas Vondra [Wed, 7 Apr 2021 13:58:35 +0000 (15:58 +0200)]
Don't add non-existent pages to bitmap from BRIN
The code in bringetbitmap() simply added the whole matching page range
to the TID bitmap, as determined by pages_per_range, even if some of the
pages were beyond the end of the heap. The query then might fail with
an error like this:
ERROR: could not open file "base/20176/20228.2" (target block 262144): previous segment is only 131021 blocks
In this case, the relation has 262093 pages (131072 and 131021 pages),
but we're trying to access block 262144, i.e. the first block of the 3rd
segment. At that point _mdfd_getseg() notices the preceding segment is
incomplete, and fails.
Hitting this in practice is rather unlikely, because:
* Most indexes use power-of-two ranges, so segments and page ranges
align perfectly (segment end is also a page range end).
* The table size has to be just right, with the last segment being
almost full - less than one page range from full segment, so that the
last page range actually crosses the segment boundary.
* Prefetch has to be enabled. The regular page access checks that
pages are not beyond heap end, but prefetch does not. On older
releases (before 12) the execution stops after hitting the first
non-existent page, so the prefetch distance has to be sufficient
to reach the first page in the next segment to trigger the issue.
Since 12 it's enough to just have prefetch enabled, the prefetch
distance does not matter.
Fixed by not adding non-existent pages to the TID bitmap. Backpatch
all the way back to 9.6 (BRIN indexes were introduced in 9.5, but that
release is EOL).
Backpatch-through: 9.6
Michael Paquier [Wed, 7 Apr 2021 10:59:27 +0000 (19:59 +0900)]
Fix potential rare failure in the kerberos TAP tests
Instead of writing a query to psql's stdin, which can cause a failure
where psql exits before writing, reporting a write failure with a broken
pipe, this changes the logic to use -c. This was not seen in the
buildfarm as no animals with a sensitive environment are running the
kerberos tests, but let's be safe.
HEAD is able to handle the situation as of 6d41dd0 for all the test
suites doing connection checks. f44b9b6 has fixed the same problem for
the LDAP tests.
Discussion: https://postgr.es/m/YGu7ceWAiSNQDgH5@paquier.xyz
Backpatch-through: 11
Fujii Masao [Mon, 5 Apr 2021 17:25:37 +0000 (02:25 +0900)]
Shut down transaction tracking at startup process exit.
Maxim Orlov reported that the shutdown of a standby server could result in
the following assertion failure. The cause of this issue was that,
when the shutdown caused the startup process to exit, recovery-time
transaction tracking was not shut down even if it was already initialized,
so some locks held by the tracked transactions could not be released.
In this situation, if another process was started and was assigned the
PGPROC entry that the startup process had used, it found such unreleased
locks during its initialization and hit the assertion failure.
TRAP: FailedAssertion("SHMQueueEmpty(&(MyProc->myProcLocks[i]))"
This commit fixes this issue by making the startup process shut down
transaction tracking and release all locks when it exits.
Back-patch to all supported branches.
Reported-by: Maxim Orlov
Author: Fujii Masao
Reviewed-by: Maxim Orlov
Discussion: https://postgr.es/m/ad4ce692cc1d89a093b471ab1d969b0b@postgrespro.ru
Tom Lane [Sun, 4 Apr 2021 21:57:07 +0000 (17:57 -0400)]
Fix more confusion in SP-GiST.
spg_box_quad_leaf_consistent unconditionally returned the leaf
datum as leafValue, even though in its usage for poly_ops that
value is of completely the wrong type.
In versions before 12, that was harmless because the core code did
nothing with leafValue in non-index-only scans ... but since commit
2a6368343, if we were doing a KNN-style scan, spgNewHeapItem would
unconditionally try to copy the value using the wrong datatype
parameters. Said copying is a waste of time and space if we're not
going to return the data, but it accidentally failed to fail until
I fixed the datatype confusion in ac9099fc1.
Hence, change spgNewHeapItem to not copy the datum unless we're
actually going to return it later. This saves cycles and dodges
the question of whether lossy opclasses are returning the right
type. Also change spg_box_quad_leaf_consistent to not return
data that might be of the wrong type, as insurance against
somebody introducing a similar bug into the core code in future.
It seems like a good idea to back-patch these two changes into
v12 and v13, although I'm afraid to change spgNewHeapItem's
mistaken idea of which datatype to use in those branches.
Per buildfarm results from
ac9099fc1.
Discussion: https://postgr.es/m/3728741.1617381471@sss.pgh.pa.us
Bruce Momjian [Fri, 2 Apr 2021 20:42:29 +0000 (16:42 -0400)]
Use macro MONTHS_PER_YEAR instead of '12' in /ecpg/pgtypeslib
All other places already use MONTHS_PER_YEAR appropriately.
Backpatch-through: 9.6
Joe Conway [Fri, 2 Apr 2021 17:48:48 +0000 (13:48 -0400)]
Clarify documentation of RESET ROLE
Command-line options, or previous "ALTER (ROLE|DATABASE) ...
SET ROLE ..." commands, can change the value of the default role
for a session. In the presence of one of these, RESET ROLE will
change the current user identifier to the default role rather
than the session user identifier. Fix the documentation to
reflect this reality. Backpatch to all supported versions.
Author: Nathan Bossart
Reviewed-By: Laurenz Albe, David G. Johnston, Joe Conway
Reported by: Nathan Bossart
Discussion: https://postgr.es/m/flat/925134DB-8212-4F60-8AB1-B1231D750CB4%40amazon.com
Backpatch-through: 9.6
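An illustrative session showing the documented behavior (role names are
made up; a minimal sketch, not part of the patch):
CREATE ROLE alice LOGIN;
CREATE ROLE bob;
GRANT bob TO alice;
ALTER ROLE alice SET role = 'bob';
-- In a new session connected as alice:
SELECT session_user, current_user;   -- alice | bob (the configured default)
SET ROLE alice;
SELECT current_user;                 -- alice
RESET ROLE;
SELECT current_user;                 -- bob again, not the session user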
Fujii Masao [Fri, 2 Apr 2021 15:07:00 +0000 (00:07 +0900)]
pg_checksums: Fix progress reporting.
pg_checksums uses two counters, the total size and the current size,
to calculate the progress. Previously the progress that pg_checksums
reported could not reach 100% at the end. The cause of this issue was
that only the pages other than new ones in each file were counted
towards the current size, while the full size of each file was counted
towards the total size. That is, the total size of all new pages could
remain as a permanent gap between the total size and the current size.
This commit fixes this issue by making pg_checksums count the sizes of
all pages, including new ones, in each file towards the current size.
Back-patch to v12 where progress reporting was added to pg_checksums.
Reported-by: Shinya Kato
Author: Shinya Kato
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com
Michael Paquier [Fri, 2 Apr 2021 07:37:11 +0000 (16:37 +0900)]
doc: Clarify how to generate backup files with non-exclusive backups
The current instructions describing how to write the backup_label and
tablespace_map files are confusing. For example, opening a file in text
mode on Windows and copy-pasting the file's contents would result in a
failure at recovery because of the extra CRLF characters generated. The
documentation did not state that clearly, and per discussion this is
not considered a supported scenario.
This commit extends the documentation a bit to mention that it may be
necessary to open the file in binary mode before writing its data.
Reported-by: Wang Shenhao
Author: David Steele
Reviewed-by: Andrew Dunstan, Magnus Hagander
Discussion: https://postgr.es/m/8373f61426074f2cb6be92e02f838389@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 9.6
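A rough sketch of the non-exclusive backup flow that this documentation
covers (not taken from the patch; function names and arguments as in the
pre-v15 branches this applies to):
SELECT pg_start_backup('nightly', false, false);   -- non-exclusive backup
-- ... copy the data directory with an external tool ...
SELECT labelfile, spcmapfile FROM pg_stop_backup(false);
-- Write labelfile into backup_label and spcmapfile into tablespace_map in
-- the backup, opening both files in binary mode so no CRLFs are added.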
Bruce Momjian [Fri, 2 Apr 2021 01:17:24 +0000 (21:17 -0400)]
doc: mention that intervening major releases can be skipped
Also mention that you should read the release notes of the intervening
major releases.
This change was also applied to the website.
Discussion: https://postgr.es/m/20210330144949.GA8259@momjian.us
Backpatch-through: 9.6
Michael Paquier [Fri, 2 Apr 2021 00:44:54 +0000 (09:44 +0900)]
Improve stability of test with vacuum_truncate in reloptions.sql
This test has been using a simple VACUUM with pg_relation_size() to
check whether a relation gets physically truncated, but it overlooked
the fact that some concurrent activity, like checkpoint buffer writes,
could cause some pages to be skipped. The second test, which enables
vacuum_truncate, could fail after seeing a non-empty relation. The
first test would not have failed, but could end up testing a behavior
different from the one intended. Both tests gain a FREEZE option, to
make the vacuums more aggressive and prevent page skips.
This is similar to the issues fixed in
c2dc1a7.
Author: Arseny Sher
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/87tuotr2hh.fsf@ars-thinkpad
Backpatch-through: 12
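An illustrative sketch of the kind of test involved (object names are
made up; not the actual regression test):
CREATE TABLE reloptions_test (i int) WITH (vacuum_truncate = false);
INSERT INTO reloptions_test VALUES (1), (NULL);
DELETE FROM reloptions_test;
-- FREEZE makes the vacuum aggressive enough that no pages are skipped.
VACUUM (FREEZE) reloptions_test;
SELECT pg_relation_size('reloptions_test') > 0;   -- not truncated
ALTER TABLE reloptions_test RESET (vacuum_truncate);
VACUUM (FREEZE) reloptions_test;
SELECT pg_relation_size('reloptions_test') = 0;   -- now truncated to zero pages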
Tom Lane [Thu, 1 Apr 2021 17:34:16 +0000 (13:34 -0400)]
Fix pg_restore's misdesigned code for detecting archive file format.
Despite the clear comments pointing out that the duplicative code
segments in ReadHead() and _discoverArchiveFormat() needed to be
in sync, they were not: the latter did not bother to apply any of
the sanity checks in the former. We'd missed noticing this partly
because none of those checks would fail in scenarios we customarily
test, and partly because the oversight would be masked if both
segments execute, which they would in cases other than needing to
autodetect the format of a non-seekable stdin source. However,
in a case meeting all these requirements --- for example, trying
to read a newer-than-supported archive format from non-seekable
stdin --- pg_restore missed applying the version check and would
likely dump core or otherwise misbehave.
The whole thing is silly anyway, because there seems little reason
to duplicate the logic beyond the one-line verification that the
file starts with "PGDMP". There seems to have been an undocumented
assumption that multiple major formats (major enough to require
separate reader modules) would nonetheless share the first half-dozen
fields of the custom-format header. This seems unlikely, so let's
fix it by just nuking the duplicate logic in _discoverArchiveFormat().
Also get rid of the pointless attempt to seek back to the start of
the file after successful autodetection. That wastes cycles, and it
means we have four behaviors to verify rather than two.
Per bug #16951 from Sergey Koposov. This has been broken for
decades, so back-patch to all supported versions.
Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org
Michael Paquier [Thu, 1 Apr 2021 06:28:56 +0000 (15:28 +0900)]
doc: Clarify use of ACCESS EXCLUSIVE lock in various sections
Some sections of the documentation used "exclusive lock" to describe
that an ACCESS EXCLUSIVE lock is taken during a given operation. This
can be confusing to the reader, as ACCESS SHARE is still allowed while
an EXCLUSIVE lock is held, which would not be the case for the
operations described in those parts of the documentation.
Author: Greg Rychlewski
Discussion: https://postgr.es/m/CAKemG7VptD=7fNWckFMsMVZL_zzvgDO6v2yVmQ+ZiBfc_06kCQ@mail.gmail.com
Backpatch-through: 9.6
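A short illustration of the difference (the table name is made up):
BEGIN;
LOCK TABLE accounts IN EXCLUSIVE MODE;
-- Other sessions can still run plain SELECTs (ACCESS SHARE is compatible).
ROLLBACK;
BEGIN;
LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
-- Now even plain SELECTs in other sessions block until this transaction ends.
ROLLBACK;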
Stephen Frost [Wed, 31 Mar 2021 20:23:13 +0000 (16:23 -0400)]
Add a docs section for obsoleted and renamed functions and settings
The new appendix groups information on renamed or removed settings,
commands, etc into an out-of-the-way part of the docs.
The original id elements are retained in each subsection to ensure that
the same filenames are produced for HTML docs. This prevents /current/
links on the web from breaking, and allows users of the web docs
to follow links from old version pages to info on the changes in the
new version. Prior to this change, a link to /current/ for renamed
sections like the recovery.conf docs would just 404. Similarly if
someone searched for recovery.conf they would find the pg11 docs,
but there would be no /12/ or /current/ link, so they couldn't easily
find out that it was removed in pg12 or how to adapt.
Index entries are also added so that there's a breadcrumb trail for
users to follow when they know the old name, but not what we changed it
to. So a user who is trying to find out how to set standby_mode in
PostgreSQL 12+, or where pg_resetxlog went, now has more chance of
finding that information.
Craig Ringer and Stephen Frost
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/CAGRY4nzPNOyYQ_1-pWYToUVqQ0ThqP5jdURnJMZPm539fdizOg%40mail.gmail.com
Backpatch-through: 10
Etsuro Fujita [Tue, 30 Mar 2021 04:00:03 +0000 (13:00 +0900)]
Update obsolete comment.
Back-patch to all supported branches.
Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK17DwzaSf%2BB71dhL2apXdtG-OmD6u2AL9Cq2ZmAR0%2BzapQ%40mail.gmail.com
Stephen Frost [Sun, 28 Mar 2021 15:28:15 +0000 (11:28 -0400)]
doc: Define TLS as an acronym
Commit
c6763156589 added an acronym reference for "TLS" but the definition
was never added.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier
Backpatch-through: 9.6
Discussion: https://postgr.es/m/27109504-82DB-41A8-8E63-C0498314F5B0@yesql.se
Tomas Vondra [Fri, 26 Mar 2021 21:34:53 +0000 (22:34 +0100)]
Fix ndistinct estimates with system attributes
When estimating the number of groups using extended statistics, the code
was discarding information about system attributes. This led to the
strange situation that
SELECT 1 FROM t GROUP BY ctid;
could produce a higher estimate (equal to pg_class.reltuples) than
SELECT 1 FROM t GROUP BY a, b, ctid;
with extended statistics on (a,b). Fixed by retaining information about
the system attribute.
Backpatch all the way to 10, where extended statistics were introduced.
Author: Tomas Vondra
Backpatch-through: 10
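A sketch of the scenario (table, data and statistics names are
illustrative, not from the commit):
CREATE TABLE t (a int, b int);
INSERT INTO t SELECT i % 10, i % 10 FROM generate_series(1, 100000) AS i;
CREATE STATISTICS t_ab (ndistinct) ON a, b FROM t;
ANALYZE t;
EXPLAIN SELECT 1 FROM t GROUP BY ctid;        -- estimated at ~reltuples groups
EXPLAIN SELECT 1 FROM t GROUP BY a, b, ctid;  -- was estimated much lower before the fix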
Alvaro Herrera [Thu, 25 Mar 2021 19:30:22 +0000 (16:30 -0300)]
Document lock obtained during partition detach
On partition detach, we acquire a SHARE lock on all tables that
reference the partitioned table that we're detaching a partition from,
but failed to document this fact. My oversight in commit
f56f8f8da6af.
Repair. Backpatch to 12.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210325180244.GA12738@alvherre.pgsql
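An illustrative example of the locking being documented (names are made
up):
CREATE TABLE parent (id int PRIMARY KEY) PARTITION BY RANGE (id);
CREATE TABLE parent_1 PARTITION OF parent FOR VALUES FROM (1) TO (100);
CREATE TABLE referencing (parent_id int REFERENCES parent (id));
-- Besides locking parent and parent_1, this also takes a SHARE lock on
-- "referencing" because it has a foreign key pointing at parent:
ALTER TABLE parent DETACH PARTITION parent_1;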
Alvaro Herrera [Thu, 25 Mar 2021 13:47:38 +0000 (10:47 -0300)]
Remove StoreSingleInheritance reimplementation
I introduced this duplicate code in commit
8b08f7d4820f for no good
reason. Remove it, and backpatch to 11 where it was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Fujii Masao [Thu, 25 Mar 2021 02:23:30 +0000 (11:23 +0900)]
Fix bug in WAL replay of COMMIT_TS_SETTS record.
Previously the WAL replay of a COMMIT_TS_SETTS record called
TransactionTreeSetCommitTsData() with the argument write_xlog=true,
which generated and wrote a new COMMIT_TS_SETTS record.
This is not acceptable because it happens during recovery.
This commit fixes the WAL replay of COMMIT_TS_SETTS records
so that it calls TransactionTreeSetCommitTsData() with write_xlog=false
and doesn't generate new WAL during recovery.
Back-patch to all supported branches.
Reported-by: lx zou <zoulx1982@163.com>
Author: Fujii Masao
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
Tom Lane [Tue, 23 Mar 2021 18:27:50 +0000 (14:27 -0400)]
Fix psql's \connect command some more.
Jasen Betts reported yet another unintended side effect of commit
85c54287a: reconnecting with "\c service=whatever" did not have the
expected results. The reason is that starting from the output of
PQconndefaults() effectively allows environment variables (such
as PGPORT) to override entries in the service file, whereas the
normal priority is the other way around.
Not using PQconndefaults at all would require yet a third main code
path in do_connect's parameter setup, so I don't really want to fix
it that way. But we can have the logic effectively ignore all the
default values for just a couple more lines of code.
This patch doesn't change the behavior for "\c -reuse-previous=on
service=whatever". That remains significantly different from before
85c54287a, because many more parameters will be re-used and thus will
not be replaceable by service file entries. But I think this
is (mostly?) intentional. In any case, since libpq does not report
where it got parameter values from, it's hard to do differently.
Per bug #16936 from Jasen Betts. As with the previous patches,
back-patch to all supported branches. (9.5 is unfortunately now
out of support, so this won't get fixed there.)
Discussion: https://postgr.es/m/16936-3f524322a53a29f0@postgresql.org
Tomas Vondra [Tue, 23 Mar 2021 03:51:53 +0000 (04:51 +0100)]
Use correct spelling of statistics kind
A couple error messages and comments used 'statistic kind', not the
correct 'statistics kind'. Fix and backpatch all the way back to 10,
where extended statistics were introduced.
Backpatch-through: 10
Fujii Masao [Tue, 23 Mar 2021 00:53:08 +0000 (09:53 +0900)]
pg_waldump: Fix bug in per-record statistics.
pg_waldump --stats=record identifies a record by a combination
of the RmgrId and the four bits of the xl_info field of the record.
But XACT records use the first of those four bits for an optional
flag and the following three bits for the opcode identifying the
record. So previously the same type of XACT record could show up with
two different four-bit values (the three opcode bits identical, the
flag bit different), causing pg_waldump --stats=record to print two
lines of per-record statistics for the same XACT record type. This is
a bug.
This commit changes pg_waldump --stats=record so that it handles only
XACT records differently, i.e., it masks xl_info down to the three
opcode bits and uses the combination of the RmgrId and the opcode as
the identifier of the record, but only for XACT records. For other
records, the four bits of the xl_info field are still used.
Back-patch to all supported branches.
Author: Kyotaro Horiguchi
Reviewed-by: Shinya Kato, Fujii Masao
Discussion: https://postgr.es/m/2020100913412132258847@highgo.ca
Michael Paquier [Mon, 22 Mar 2021 00:51:19 +0000 (09:51 +0900)]
Fix new TAP test for 2PC transactions and PITRs on Windows
The test added by
595b9cb forgot that on Windows it is necessary to set
up pg_hba.conf (see PostgresNode::set_replication_conf) with a specific
entry or base backups fail. Any node that needs to support
replication just has to pass down allows_streaming at initialization.
This updates the test to do so, and simplifies things a bit while at it.
Per buildfarm member fairywren. Any Windows hosts running this test
would have failed, and I have reproduced the problem as well.
Backpatch-through: 10
Michael Paquier [Sun, 21 Mar 2021 23:31:05 +0000 (08:31 +0900)]
Fix timeline assignment in checkpoints with 2PC transactions
Any transactions found as still prepared by a checkpoint have their
state data read from the WAL records generated by PREPARE TRANSACTION
before being moved into their new location within pg_twophase/. While
reading such records, the WAL reader uses the callback
read_local_xlog_page() to read a page; this callback is shared across
various parts of the system. Since 1148e22a, it updates
ThisTimeLineID when reading a record while in recovery, which is
potentially helpful in the context of cascading WAL senders.
This update of ThisTimeLineID interacts badly with the checkpointer if a
promotion happens while some 2PC data is read from its record, as, by
changing ThisTimeLineID, any follow-up WAL records would be written to
a timeline older than the promoted one. This results in consistency
issues. For instance, a subsequent server restart could fail to find a
valid checkpoint record, resulting in a PANIC.
This commit changes the code reading the 2PC data to reset the timeline
once the 2PC record has been read, to avoid messing up the static
state of the checkpointer. It would be tempting to do the same thing
directly in read_local_xlog_page(). However, based on the discussion
that has led to
1148e22a, users may rely on the updates of
ThisTimeLineID when a WAL record page is read in recovery, so changing
this callback could break some cases that are working currently.
A TAP test reproducing the issue is added, relying on a PITR to
precisely trigger a promotion with a prepared transaction still
tracked.
Per discussion with Heikki Linnakangas, Kyotaro Horiguchi, Fujii Masao
and myself.
Author: Soumyadeep Chakraborty, Jimmy Yih, Kevin Yeap
Discussion: https://postgr.es/m/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com
Backpatch-through: 10
Tom Lane [Sat, 20 Mar 2021 16:47:21 +0000 (12:47 -0400)]
Fix memory leak when rejecting bogus DH parameters.
While back-patching
e0e569e1d, I noted that there were some other
places where we ought to be applying DH_free(); namely, where we
load some DH parameters from a file and then reject them as not
being sufficiently secure. While it seems really unlikely that
anybody would hit these code paths in production, let alone do
so repeatedly, let's fix it for consistency.
Back-patch to v10 where this code was introduced.
Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
Tom Lane [Sat, 20 Mar 2021 16:38:22 +0000 (12:38 -0400)]
Fix memory leak when initializing DH parameters in backend
When loading DH parameters used for the generation of ephemeral DH keys
in the backend, the code has never bothered releasing the memory used
for the DH information loaded from a file or from libpq's default. This
commit makes sure that the information is properly free()'d.
Back-patch of
e0e569e1d. We originally thought the leak was minor and
not worth back-patching, but Jelte Fennema pointed out that repeated
SIGHUP's can result in very serious bloat of the postmaster, which is
then multiplied by being duplicated into each forked child.
Back-patch to v10; the code looked different before
c0a15e07c,
and didn't have a leak in the actually-live code paths.
Michael Paquier
Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
Tom Lane [Fri, 19 Mar 2021 02:21:58 +0000 (22:21 -0400)]
Don't leak malloc'd error string in libpqrcv_check_conninfo().
We leaked the error report from PQconninfoParse, when there was
one. It seems unlikely that real usage patterns would repeat
the failure often enough to create serious bloat, but let's
back-patch anyway to keep the code similar in all branches.
Found via valgrind testing.
Back-patch to v10 where this code was added.
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
Tom Lane [Fri, 19 Mar 2021 02:09:41 +0000 (22:09 -0400)]
Don't leak malloc'd strings when a GUC setting is rejected.
Because guc.c prefers to keep all its string values in malloc'd
not palloc'd storage, it has to be more careful than usual to
avoid leaks. Error exits out of string GUC hook checks failed
to clear the proposed value string, and error exits out of
ProcessGUCArray() failed to clear the malloc'd results of
ParseLongOption().
Found via valgrind testing.
This problem is ancient, so back-patch to all supported branches.
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
Tom Lane [Fri, 19 Mar 2021 01:44:43 +0000 (21:44 -0400)]
Don't leak compiled regex(es) when an ispell cache entry is dropped.
The text search cache mechanisms assume that we can clean up
an invalidated dictionary cache entry simply by resetting the
associated long-lived memory context. However, that does not work
for ispell affixes that make use of regular expressions, because
the regex library deals in plain old malloc. Hence, we leaked
compiled regex(es) any time we dropped such a cache entry. That
could quickly add up, since even a fairly trivial regex can use up
tens of kB, and a large one can eat megabytes. Add a memory context
callback to ensure that a regex gets freed when its owning cache
entry is cleared.
Found via valgrind testing.
This problem is ancient, so back-patch to all supported branches.
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
Tom Lane [Fri, 19 Mar 2021 00:50:56 +0000 (20:50 -0400)]
Don't run RelationInitTableAccessMethod in a long-lived context.
Some code paths in this function perform syscache lookups, which
can lead to table accesses and possibly leakage of cruft into
the caller's context. If said context is CacheMemoryContext,
we eventually will have visible bloat. But fixing this is no
harder than moving one memory context switch step. (The other
callers don't have a problem.)
Andres Freund and I independently found this via valgrind testing.
Back-patch to v12 where this code was added.
Discussion: https://postgr.es/m/20210317023101.anvejcfotwka6gaa@alap3.anarazel.de
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
Tom Lane [Fri, 19 Mar 2021 00:37:09 +0000 (20:37 -0400)]
Don't leak rd_statlist when a relcache entry is dropped.
Although these lists are usually NIL, and even when not empty
are unlikely to be large, constant relcache update traffic could
eventually result in visible bloat of CacheMemoryContext.
Found via valgrind testing.
Back-patch to v10 where this field was added.
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
Magnus Hagander [Thu, 18 Mar 2021 10:23:48 +0000 (11:23 +0100)]
Fix function name in error hint
pg_read_file() is the function that's in core; pg_file_read() is in
adminpack. But when pg_file_read() from adminpack is used, it calls the
*C*-level function pg_read_file() in core, which probably threw the
original author off. The error hint should name the SQL function.
Reported-By: Sergei Kornilov
Backpatch-through: 11
Discussion: https://postgr.es/m/373021616060475@mail.yandex.ru
Tom Lane [Wed, 17 Mar 2021 20:10:38 +0000 (16:10 -0400)]
Prevent buffer overrun in read_tablespace_map().
Robert Foggia of Trustwave reported that read_tablespace_map()
fails to prevent an overrun of its on-stack input buffer.
Since the tablespace map file is presumed trustworthy, this does
not seem like an interesting security vulnerability, but still
we should fix it just in the name of robustness.
While here, document that pg_basebackup's --tablespace-mapping option
doesn't work with tar-format output, because it doesn't. To make it
work, we'd have to modify the tablespace_map file within the tarball
sent by the server, which might be possible but I'm not volunteering.
(Less-painful solutions would require changing the basebackup protocol
so that the source server could adjust the map. That's not very
appetizing either.)
Thomas Munro [Wed, 17 Mar 2021 12:06:01 +0000 (01:06 +1300)]
Revert "Fix race in Parallel Hash Join batch cleanup."
This reverts commit
8fa2478b407ef867d501fafcdea45fd827f70799.
Discussion: https://postgr.es/m/CA%2BhUKGJmcqAE3MZeDCLLXa62cWM0AJbKmp2JrJYaJ86bz36LFA%40mail.gmail.com
Thomas Munro [Wed, 17 Mar 2021 04:46:39 +0000 (17:46 +1300)]
Fix race in Parallel Hash Join batch cleanup.
With very unlucky timing and parallel_leader_participation off, PHJ
could attempt to access per-batch state just as it was being freed.
There was code intended to prevent that by checking for a cleared
pointer, but it was buggy.
Fix, by introducing an extra barrier phase. The new phase
PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to
find a batch to help with, and PHJ_BUILD_DONE means that it is too late.
The last to detach will free the array of per-batch state as before, but
now it will also atomically advance the phase at the same time, so that
late attachers can avoid the hazard, without the data race. This
mirrors the way per-batch hash tables are freed (see phases
PHJ_BATCH_PROBING and PHJ_BATCH_DONE).
Revealed by a one-off build farm failure, where BarrierAttach() failed a
sanity check assertion, because the memory had been clobbered by
dsa_free().
Back-patch to 11, where the code arrived.
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
Tom Lane [Tue, 16 Mar 2021 20:02:49 +0000 (16:02 -0400)]
Avoid corner-case memory leak in SSL parameter processing.
After reading the root cert list from the ssl_ca_file, immediately
install it as client CA list of the new SSL context. That gives the
SSL context ownership of the list, so that SSL_CTX_free will free it.
This avoids a permanent memory leak if we fail further down in
be_tls_init(), which could happen if bogus CRL data is offered.
The leak could only amount to something if the CRL parameters get
broken after server start (else we'd just quit) and then the server
is SIGHUP'd many times without fixing the CRL data. That's rather
unlikely perhaps, but it seems worth fixing, if only because the
code is clearer this way.
While we're here, add some comments about the memory management
aspects of this logic.
Noted by Jelte Fennema and independently by Andres Freund.
Back-patch to v10; before commit
de41869b6 it doesn't matter,
since we'd not re-execute this code during SIGHUP.
Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org