PG 10 changed the way sequences are stored in the catalogs. The merge broke
some of that handling, and this commit fixes most of it. We still see
some sequence-related failures in the regression tests, but the most
obvious failures are now fixed.
|
|
When, during logical decoding, a transaction gets too big, its
contents get spilled to disk. Not just the top-level transaction gets
spilled, but *also* all of its subtransactions, even if they're not
that large themselves. Unfortunately we didn't clean up
such small spilled subtransactions from disk.
Fix that, by keeping better track of whether a transaction has been
spilled to disk.
Author: Andres Freund
Reported-By: Dmitriy Sarafannikov, Fabrízio de Royes Mello
Discussion:
https://postgr.es/m/1457621358.355011041@f382.i.mail.ru
https://postgr.es/m/CAFcNs+qNMhNYii4nxpO6gqsndiyxNDYV0S=JNq0v_sEE+9PHXg@mail.gmail.com
Backpatch: 9.4-, where logical decoding was introduced
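To illustrate the pattern, a self-contained sketch (struct and function
names here are made up; the real code lives in reorderbuffer.c):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a reorderbuffer transaction; the real
     * struct and flag in reorderbuffer.c are named differently. */
    typedef struct Xact
    {
        int     xid;
        bool    spilled_to_disk;    /* the new bookkeeping */
    } Xact;

    static void
    spill(Xact *txn)
    {
        /* ... serialize the transaction's changes to a spill file ... */
        txn->spilled_to_disk = true;
    }

    static void
    cleanup(Xact *txn)
    {
        /* Remove spill files for *any* transaction that was ever
         * spilled, not only ones that look "big" at cleanup time. */
        if (txn->spilled_to_disk)
            printf("removing spill files for xid %d\n", txn->xid);
    }

    int
    main(void)
    {
        Xact sub = { 1001, false };
        spill(&sub);    /* small subxact spilled along with its parent */
        cleanup(&sub);  /* now reliably cleaned up */
        return 0;
    }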
|
|
Author: Yugo Nagata
|
|
Author: Julien Rouhaud <julien.rouhaud@dalibo.com>
|
|
Author: Julien Rouhaud <julien.rouhaud@dalibo.com>
|
|
With ROUNDROBIN distribution, the initial node was always set to the first
node in the list. That works fine when inserting many rows at once (e.g. with
INSERT ... SELECT), but with single-row inserts it puts all data on the
first node, resulting in an unbalanced distribution.
This randomizes the choice of the initial node, so that with single-row
inserts ROUNDROBIN behaves a bit like RANDOM distribution.
This also removes an unnecessary srand() call from RelationBuildLocator(),
located after a call to rand().
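A sketch of the new starting-point selection (the helper name is
hypothetical; the real change lives in RelationBuildLocator()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical helper: pick a random starting offset into the node
     * list, so streams of single-row inserts don't all start at node 0. */
    static int
    initial_node(int num_nodes)
    {
        return rand() % num_nodes;
    }

    int
    main(void)
    {
        srand((unsigned) time(NULL));   /* seed once, *before* any rand() */
        int start = initial_node(4);
        /* subsequent rows advance round-robin: (start + i) % num_nodes */
        printf("first row goes to node %d\n", start);
        return 0;
    }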
|
|
Due to a coding issue in IsDistColumnForRelId() it was possible to drop
columns that were used as modulo distribution keys. A simple example
demonstrates the behavior:
CREATE TABLE t (a INT, b INT, c INT) DISTRIBUTE BY MODULO (a);
ALTER TABLE t DROP COLUMN a;
test=# \d+ t
                           Table "public.t"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 b      | integer |           | plain   |              |
 c      | integer |           | plain   |              |
Distribute By: MODULO(........pg.dropped.1........)
Location Nodes: ALL DATANODES
With this commit, the ALTER TABLE command fails as expected:
ERROR: Distribution column cannot be dropped
The commit simplifies the coding a bit, and removes several functions
that were not needed anymore (and unused outside locator.c).
|
|
Until now the BIGINT data type was not supported by MODULO distribution, and
attempts to create such tables failed. This patch removes that limitation.
The compute_modulo() function originally used an optimized algorithm
from http://www.graphics.stanford.edu/~seander/bithacks.html (namely the
one described in section "Compute modulus division by (1 << s) - 1 in
parallel without a division operator") to compute the modulo. But that
algorithm version only supported 32-bit values, and so would require
changes to support 64-bit values. Instead, I've decided to simply drop
that code and use the plain % operator, which should translate to an IDIV
instruction.
Judging by benchmarks (MODULO on INTEGER column), switching to plain
modulo (%) might result in about 1% slowdown, but it might easily be
just noise caused by different binary layout due to code changes. In
fact, the simplified algorithm is much less noisy in this respect.
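For illustration, the retired trick only handles divisors of the form
(1 << s) - 1 and 32-bit inputs, while the plain operator works for any
divisor and for BIGINT values. A sketch adapted from the bithacks page
linked above (not the exact removed code):

    #include <stdint.h>

    /* Modulus by (1 << s) - 1 without division, 32-bit only -- the
     * style of code that was removed. */
    uint32_t
    mod_mersenne(uint32_t n, unsigned s)
    {
        const uint32_t d = (1u << s) - 1;   /* 1, 3, 7, 15, 31, ... */
        uint32_t m;

        for (m = n; n > d; n = m)
            for (m = 0; n; n >>= s)
                m += n & d;
        return (m == d) ? 0 : m;
    }

    /* What compute_modulo() effectively does now: works for 64-bit too. */
    int64_t
    compute_modulo(int64_t value, int64_t num_nodes)
    {
        return value % num_nodes;
    }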
|
|
There's no plpgsql_1.out upstream, only plpgsql.out. So make it
consistent, which means the file will also receive upstream changes.
|
|
The fixed differences had mostly trivial causes:
1) The expected output was missing for several tests, most likely due to
the initial resolution of a merge conflict (at a time when it was not possible
to run the tests, making verification impossible). This includes blocks
labeled as
- Test handling of expanded arrays
- Test for proper handling of cast-expression caching
and a few more smaller ones.
2) Change in spelling of messages, e.g. from
CONTEXT: PL/pgSQL function footest() line 5 at EXECUTE statement
to
CONTEXT: PL/pgSQL function footest() line 5 at EXECUTE
3) Change in how the context is displayed for notices, warnings and errors,
which was reworked by 0426f349effb6bde2061f3398a71db7180c97dd9. Since that
commit we only show the context for errors by default.
|
|
Author: Julien Rouhaud <julien.rouhaud@dalibo.com>
|
|
This ensures that triggers can see an up-to-date timestamp.
Reported-by: Konstantin Evteev <konst583@gmail.com>
|
|
Author: Michael Paquier <michael.paquier@gmail.com>
|
|
Author: Daniel Gustafsson <daniel@yesql.se>
|
|
This should normally be determined by a configure check, but until
someone figures out how to do that on Windows, it's better that the code
uses the new function by default.
|
|
If a .c or .h file corresponds to a .y or .l file, skip indenting it.
There's no point in reindenting derived files, and these files tend to
confuse pgindent. (Which probably indicates a bug in BSD indent, but
I can't get excited about trying to fix it.)
For the same reasons, add src/backend/utils/fmgrtab.c to the set of
files excluded by src/tools/pgindent/exclude_file_patterns.
The point of doing this is that it makes it safe to run pgindent over
the tree without doing "make maintainer-clean" first. While these are
not the only derived .c/.h files in the tree, they are the only ones
pgindent fails on. Removing that prerequisite step results in one less
way to mess up a pgindent run, and it's necessary if we ever hope to get
to the ease of running pgindent via "make indent".
|
|
This allows us to combine the opening and the ownership check.
Reported-by: Robert Haas <robertmhaas@gmail.com>
|
|
Windows uses a separate code path for libc locales. The code previously
ended up there even when an ICU collation was to be used, leading to a
crash.
Reported-by: Ashutosh Sharma <ashu.coek88@gmail.com>
|
|
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type
at CREATE TYPE time. However, we did not create a pg_depend record when
doing that, so the functions were left with no dependency on the type.
Fixes bug #14706, reported by Karen Huddleston.
Discussion: https://www.postgresql.org/message-id/20170614232259.1424.82774@wrigleys.postgresql.org
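The shape of the fix, sketched against the backend's dependency API
(backend-only code; the helper name is made up and the committed change
may differ in detail):

    #include "postgres.h"
    #include "catalog/dependency.h"
    #include "catalog/objectaddress.h"
    #include "catalog/pg_proc.h"
    #include "catalog/pg_type.h"

    /* Record that an I/O function depends on its base type, so the
     * function can't be left dangling when the type is dropped. */
    static void
    record_io_function_dependency(Oid funcOid, Oid typeOid)
    {
        ObjectAddress myself,
                      referenced;

        ObjectAddressSet(myself, ProcedureRelationId, funcOid);
        ObjectAddressSet(referenced, TypeRelationId, typeOid);

        recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);
    }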
|
|
This is a new utility statement added in PG 10 and we should ensure that it
gets propagated to all the nodes in the cluster.
|
|
The _equalTableFunc() omission of coltypmods has semantic significance,
but I did not track down resulting user-visible bugs, if any. The other
changes are cosmetic only, affecting order. catversion bump due to
readfuncs.c field order change.
|
|
The read/out functions for this node type were missing. So implement those
functions. In addition, the FQS code path was not recognizing this new node
type correctly. Fix that too.
The ruleutils code also lacked the ability to deparse this expression. For now
we just
emit a DEFAULT clause while deparsing NextValueExpr and assume that the remote
node will do the necessary lookups to find the correct sequence and invoke
nextval() on the sequence.
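Roughly, the additions follow the stock outfuncs.c/readfuncs.c idiom for
NextValueExpr's two fields (a sketch relying on the macros local to those
files, not the verbatim commit):

    static void
    _outNextValueExpr(StringInfo str, const NextValueExpr *node)
    {
        WRITE_NODE_TYPE("NEXTVALUEEXPR");

        WRITE_OID_FIELD(seqid);
        WRITE_OID_FIELD(typeId);
    }

    static NextValueExpr *
    _readNextValueExpr(void)
    {
        READ_LOCALS(NextValueExpr);

        READ_OID_FIELD(seqid);
        READ_OID_FIELD(typeId);

        READ_DONE();
    }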
|
|
A new attribute was added to the pg_attribute catalog in PG 10, so we must
update the default FormData_pg_attribute for XL's xc_node_id system attribute
with the correct initial values.
|
|
The TAP tests mostly don't work without IPC::Run, and the reason for
the failure is not immediately obvious from the error messages you get.
So teach configure to reject --enable-tap-tests unless IPC::Run exists.
Mostly this just involves adding ax_prog_perl_modules.m4 from the GNU
autoconf archives.
This was discussed last year, but we held off on the theory that we might
be switching to CMake soon. That's evidently not happening for v10,
so let's absorb this now.
Eugene Kazakov and Michael Paquier
Discussion: https://postgr.es/m/56BDDC20.9020506@postgrespro.ru
Discussion: https://postgr.es/m/CAB7nPqRVKG_CR4Dy_AMfE6DXcr6F7ygy2goa2atJU4XkerDRUg@mail.gmail.com
|
|
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking. In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object. Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.
One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection fails.
But we might as well fix it.
The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about. Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.
Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.
Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).
Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
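The corrected loop shape, sketched against the libpq API (simplified; the
postgres_fdw variants additionally wrap this in PG_TRY):

    #include <stddef.h>
    #include <libpq-fe.h>

    static PGresult *
    get_result_nonblocking(PGconn *conn)
    {
        PGresult   *last = NULL;

        for (;;)
        {
            PGresult   *res;

            while (PQisBusy(conn))
            {
                /* ... wait for the socket to become readable ... */
                if (!PQconsumeInput(conn))
                {
                    PQclear(last);  /* the fix: don't leak it */
                    return NULL;    /* connection trouble */
                }
            }

            res = PQgetResult(conn);
            if (res == NULL)
                break;              /* query is complete */
            PQclear(last);          /* keep only the latest result */
            last = res;
        }
        return last;
    }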
|
|
pg_upgrade never used Windows junction points but instead always used
Windows hard links.
Reported-by: Adrian Klaver
Discussion: https://postgr.es/m/6a638c60-90bb-4921-8ee4-5fdad68f8b09@aklaver.com
Backpatch-through: 9.3, where the mention first appeared
|
|
It was unsafe to instruct users to start/stop the server after
pg_upgrade was run but before the standby servers were rsync'ed. The
new instructions avoid this.
RELEASE NOTES: This fix should be mentioned in the minor release notes.
Reported-by: Dmitriy Sarafannikov and Sergey Burladyan
Discussion: https://postgr.es/m/87wp8o506b.fsf@seb.koffice.internal
Backpatch-through: 9.5, where standby server upgrade instructions first appeared
|
|
Avoid using prefix "staext" when everything else uses "statext".
Author: Kyotaro HORIGUCHI
Discussion: https://postgr.es/m/20170615.140041.165731947.horiguchi.kyotaro@lab.ntt.co.jp
|
|
Show "All tables" property in \dRp and \dRp+. Don't list tables for
such publications in \dRp+, since it's redundant and the list could be
very long.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Jeff Janes <jeff.janes@gmail.com>
|
|
Author: Daniel Gustafsson <daniel@yesql.se>
|
|
This is no longer needed because the tests use PostgresNode.
Reported-by: Michael Paquier <michael.paquier@gmail.com>
|
|
While using \d and associated commands in psql, we now check for partitioned
tables and fetch their distribution information too, just as for regular
tables.
|
|
While dealing with XL distribution, we check whether the table is a regular
table. Since partitioned tables must get the same treatment as regular tables
as far as distribution goes, we check for and allow partitioned tables too.
|
|
Merge upstream master branch up to e800656d9a9b40b2f55afabe76354ab6d93353b3.
Code compiles and regression works ok (with lots and lots of failures though).
|
|
We mustn't use a global variable since concurrent GTM threads might try to
compute a snapshot at the same time and may overwrite the information before a
thread can send the complete snapshot to the client. Chi Gao
<chi.gao@microfun.com> reported that this can cause an infinite wait on the
client side: the client expects N bytes of data but only receives (N - x)
bytes, and keeps waiting for the remaining x bytes, which the GTM never sends.
While we don't have a report, it's obvious that it can also go wrong in the
other direction.
We fix this by using a thread-specific storage which ensures that the snapshot
cannot be changed while it's being sent to the client.
Report, analysis and fix by Chi Gao <chi.gao@microfun.com>. Some minor
editorialisation by me.
Backpatched to XL9_5_STABLE
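The gist, sketched with a GCC-style thread-local (the real code hangs the
buffer off GTM's per-thread info structure; names here are illustrative):

    /* Per-thread snapshot buffer: another thread computing a snapshot
     * can no longer clobber ours mid-send. */
    typedef struct SnapshotBuf
    {
        int     xcnt;
        int     xip[1024];
    } SnapshotBuf;

    static __thread SnapshotBuf my_snapshot;

    static SnapshotBuf *
    compute_snapshot(void)
    {
        /* fill my_snapshot under the GTM lock, then return it; it stays
         * stable while this thread streams it to its client */
        return &my_snapshot;
    }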
|
|
In one place we missed an END_CRIT_SECTION(), so put that back. Also,
GetGlobalSeqName() was using the wrong relation while generating the sequence
name. This was caused by the merge, which introduced a new code block in
between, thus leaving the local variable that refers to the sequence
relation pointing at the pg_catalog.pg_sequence relation.
|
|
While executing RemoteSubplan we'd accidentally set "execute_once" to true,
even though that wasn't appropriate. Correct that mistake and always use the
information in the Portal to decide whether to execute once or more.
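Roughly, assuming PG 10's ExecutorRun() signature and the Portal's
run_once flag (a sketch, not the committed XL hunk):

    #include "postgres.h"
    #include "executor/executor.h"
    #include "utils/portal.h"

    static void
    run_remote_subplan(QueryDesc *queryDesc, Portal portal, uint64 count)
    {
        /* derive execute_once from the portal instead of hard-coding true */
        bool        execute_once = portal->run_once;

        ExecutorRun(queryDesc, ForwardScanDirection, count, execute_once);
    }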
|
|
Starting with PG 10, even utility statements are wrapped in a PlannedStmt. So we
must ensure that we are dealing with non-utility statements before trying to
look into the planTree because it won't be set for utility statements.
|
|
Starting with PG 10, pg_parse_query() returns a list of RawStmt nodes rather
than a list of raw parse trees. The actual parse tree is now available as
RawStmt->stmt. So we
must look into the correct place to check if the supplied query is one of the
special statements such as VACUUM, CLUSTER or CREATE INDEX statement, which
needs special handling.
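A sketch of unwrapping RawStmt before testing the statement type (the
helper name is made up; the node types are the stock ones):

    #include "postgres.h"
    #include "nodes/parsenodes.h"
    #include "nodes/pg_list.h"

    static bool
    needs_special_handling(List *raw_parsetree_list)
    {
        ListCell   *lc;

        foreach(lc, raw_parsetree_list)
        {
            RawStmt    *raw = lfirst_node(RawStmt, lc);
            Node       *stmt = raw->stmt;   /* the actual parse tree */

            if (IsA(stmt, VacuumStmt) ||
                IsA(stmt, ClusterStmt) ||
                IsA(stmt, IndexStmt))
                return true;
        }
        return false;
    }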
|
|
We must convert the OID into a qualified name before sending it down to the remote
datanode and do the reverse on the remote end. This is a new node type added in
PG 10 and hence support was missing.
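A sketch of the OID-to-name half, using stock catalog lookups (the helper
name is made up):

    #include "postgres.h"
    #include "utils/builtins.h"
    #include "utils/lsyscache.h"

    /* Turn a local sequence OID into "schema.name" text that the remote
     * datanode can parse and resolve against its own catalogs. */
    static char *
    seq_oid_to_qualified_name(Oid seqid)
    {
        char       *relname = get_rel_name(seqid);
        char       *nspname = get_namespace_name(get_rel_namespace(seqid));

        return quote_qualified_identifier(nspname, relname);
    }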
|
|
PG 10 now has facilities to read Plan and Scan nodes. So we don't need our own
implementation for that. This makes the code a bit shorter and cleaner too.
|
|
"tapeset" may not be setup when we are using tuplesort to merge tuples from
remote connections. So avoid referencing tapeset members.
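The defensive shape of the change (hypothetical helper; Tuplesortstate's
members are private to tuplesort.c):

    #include "postgres.h"
    #include "utils/logtape.h"

    /* Report on-disk usage only if tapes were actually created; tapeset
     * stays NULL when pre-sorted tuples are merely merged from remote
     * connections and never spill to disk. */
    static long
    sort_disk_blocks(LogicalTapeSet *tapeset)
    {
        return (tapeset != NULL) ? LogicalTapeSetBlocks(tapeset) : 0;
    }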
|
|
Patch by Yugo Nagata.
|
|
The documentation said that read-only transactions (when not on a standby)
allow sequences to be updated. This has been wrong since commit
05d8a561ff85db1545f5768fe8d8dc9d99ad2ef7.
Discussion: https://www.postgresql.org/message-id/20170614.110826.425627939780392324.t-ishii%40sraoss.co.jp
|
|
Commit 18ce3a4ab22d2984f8540ab480979c851dae5338 failed to update
the comments in parsenodes.h for the new members, and made only
incomplete updates to src/backend/nodes.
Thomas Munro, per a report from Noah Misch.
Discussion: http://postgr.es/m/20170611062525.GA1628882@rfd.leadboat.com
|
|
Previously we required every exported transaction to have an xid
assigned. That was used to check that the exporting transaction is
still running, which in turn is needed to guarantee that the
necessary rows haven't been removed between exporting and importing
the snapshot.
The exported xid caused unnecessary problems with logical decoding,
because slot creation has to wait for all concurrent xids to finish,
which in turn serializes concurrent slot creation. It also
prohibited snapshots from being exported on hot-standby replicas.
Instead export the virtual transactionid, which avoids the unnecessary
serialization and the inability to export snapshots on standbys. This
changes the file name of the exported snapshot, but since we never
documented what that name means, that seems ok.
Author: Petr Jelinek, slightly editorialized by me
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f598b4b8-8cd7-0d54-0939-adda763d8c34@2ndquadrant.com
|