Age | Commit message | Author |
|
array.
This fixes a problem seen when a slave is added for only one master datanode or
coordinator, as demonstrated by the TAP tests
|
|
current value of val_used
|
|
|
|
|
|
Add some more cleanup to TAP tests.
|
|
Test add/remove nodes and replicas
|
|
master
|
|
We'd earlier turned that on so that PQping() could check the status of standbys. But
that clearly creates bigger trouble, and standbys may just stop working. So add
a new mechanism to ping slave nodes using pg_ctl
|
|
server.
Before starting or initialising a new GTM/GTM proxy, we first try to stop a
running server. But if the server is not running, which is most often the case, this
shows an error. This change avoids those unnecessary error messages
|
|
|
|
Now we have far more information embedded in the GID string. So there is no
point looking for a [0-9]+ pattern beyond the prefix. Just assume external tools
would never use that pattern (PREPARE TRANSACTION should anyway throw an error
if they try to use it)
|
|
Accept expected XL behavior. Add ORDER BY where needed.
|
|
Separate out Issue #73 in Issue Tracker.
Accept XL query plans.
|
|
|
|
Add an ORDER BY clause for consistent test results from the
json_agg() function
|
|
messages in a file or a module.
We now support a new --enable-genmsgids configure option. When compiled with
this, superusers can run pg_msgmodule_set(moduleid, fileid, msgid, newlevel)
command to override the log level specified in the source code.
There are many TODOs and limitations of this approach. We can only, for
example, increase the logging level of messages, i.e. turn DEBUG2 into DEBUG1 or DEBUG
into LOG. But we can't change ERROR to PANIC or suppress a log message. Also, we
are using a very sparse representation of the message log levels. This
increases memory requirements significantly, though it should speed up lookups and
keep the code simple.
When configured with --enable-genmsgids, a file named MSGMODULES is created at
the top of the build tree and the module-ids are later picked from that file
while compiling individual files. We also preprocess each file before
compilation and save all elog() calls, along with the file_name, line_number,
message, module_id, file_id and msg_id, in the MSGIDS file. This file can then be used
to look up the messages so that correct information is passed. This clearly
needs a lot more polishing and work.
When configured without --enable-genmsgids, we don't expect this facility to add
overhead because all the code gets #ifdef-ed out
|
|
the cluster one at a time.
A new option "prepare config empty" is now supported; it sets up an almost
empty conf file, and all components, including GTM, can then be added one at a time
|
|
|
|
other platforms.
|
|
|
|
|
|
slaves in pgxc_ctl.conf file as well as corresponding "add" commands.
Recent releases of Postgres allow users to specify a separate XLOG dir at
initdb time, and we extend the same facility to pgxc_ctl.
|
|
increased GIDSIZE
Per report by Tobias Oberstein
|
|
We were doing this check on the local node, which was clearly wrong.
|
|
transactions
When a two-phase commit protocol is interrupted mid-way, for example because of
a server crash, it can leave behind unresolved prepared transactions which must
be resolved when the node comes back. Postgres-XL provides a pgxc_clean utility
to look up the list of prepared transactions and resolve them based on the status of
each such transaction on every node. But there were many issues with the utility
because of which it would either fail to resolve all transactions or, worse,
resolve them in the wrong manner. This commit fixes all such issues discovered
during some simple crash recovery testing.
One of the problems with the current approach was that the utility would not
know which nodes were involved in a transaction. So if it sees a
transaction as prepared on one subset of nodes, but missing on another
subset, it cannot tell whether the transaction was originally executed on nodes other
than those where it is prepared, with the other nodes failing before they could
prepare the transaction. If it was indeed executed on other nodes, such a transaction
must be aborted. Whereas if it was only executed on the set of nodes where it is
currently prepared, then it can safely be committed.
We now store the information about nodes participating in a 2PC directly in the
GID. This of course has the downside of increasing GIDSIZE, which in turn increases
the shared memory requirement. But with today's server sizes, that should not be a
very big concern. We could also look at the possibility of storing this
information externally, such as on the GTM, but the fix seems good enough for
now.
|
|
The old "low-level" API is deprecated, and doesn't support hardware
acceleration. And this makes the code simpler, too.
Discussion: <561274F1.1030000@iki.fi>
|
|
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.
pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:
- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom
- /dev/random
Original patch by Magnus Hagander, with further work by Michael Paquier
and me.
Discussion: <CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com>
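As a rough illustration (not from the commit itself), a backend caller might use the new routine like this; the helper name generate_salt and the header comment are assumptions:
    #include "postgres.h"    /* pg_strong_random() is assumed to be visible via port.h */

    /* Hypothetical helper: fill a caller-supplied buffer with a random salt. */
    static bool
    generate_salt(char *salt, int len)
    {
        /* pg_strong_random() returns false if no strong source could be used. */
        if (!pg_strong_random(salt, len))
        {
            ereport(LOG,
                    (errmsg("could not generate random salt")));
            return false;
        }
        return true;
    }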
|
|
Similar to 0137caf273, this makes contrib and PL tests less dependent on
hash-table order. After this commit, at least some order-affecting
changes to execGrouping.c don't result in regression test changes
anymore.
|
|
Windows apparently has a constant named WAIT_TIMEOUT, and some of these
other names are pretty generic, too. Insert "PG_" at the front of each
name in order to disambiguate.
Michael Paquier
|
|
Just turning the crank on the project started in commit d51924be8.
These cases turn out to be exact subsets of the boilerplate needed
for hstore_plpython.
Discussion: <2652.1475512158@sss.pgh.pa.us>
|
|
WaitLatch, WaitLatchOrSocket, and WaitEventSetWait now take an
additional wait_event_info parameter; legal values are defined in
pgstat.h. This makes it possible to uniquely identify every point in
the core code where we are waiting for a latch; extensions can pass
WAIT_EXTENSION.
Because latches were the major wait primitive not previously covered
by this patch, it is now possible to see information in
pg_stat_activity on a large number of important wait events not
previously addressed, such as ClientRead, ClientWrite, and SyncRep.
Unfortunately, many of the wait events added by this patch will fail
to appear in pg_stat_activity because they're only used in background
processes which don't currently appear in pg_stat_activity. We should
fix this either by creating a separate view for such information, or
else by deciding to include them in pg_stat_activity after all.
Michael Paquier and Robert Haas, reviewed by Alexander Korotkov and
Thomas Munro.
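A hedged sketch of the new call shape, as an extension's background worker might use it; the one-second timeout and loop body are illustrative, and note that the separate commit above adds a PG_ prefix to names such as WAIT_EXTENSION afterwards:
    #include "postgres.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/ipc.h"
    #include "storage/latch.h"

    static void
    worker_main_loop(void)
    {
        for (;;)
        {
            int rc = WaitLatch(MyLatch,
                               WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                               1000L,            /* wake up at least once a second */
                               WAIT_EXTENSION);  /* reported as the wait event */

            ResetLatch(MyLatch);
            if (rc & WL_POSTMASTER_DEATH)
                proc_exit(1);
            /* ... periodic work goes here ... */
        }
    }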
|
|
In commit d51924be8, I overlooked the need to provide linkage for
PLyUnicode_FromStringAndSize, because that's only used (and indeed
only exists) in Python 3 builds.
In light of the need to #if this item, rearrange the ordering of
the code related to each function pointer, so as not to need more
#if's than absolutely necessary.
Per buildfarm.
|
|
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.
Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.
Since this code had too many bugs already, noticeably increase test
coverage.
Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
Backport: 9.4-, where logical decoding was added
|
|
Previously, on most platforms, we allowed hstore_plpython's references
to hstore and plpython to be unresolved symbols at link time, trusting
the dynamic linker to resolve them when the module is loaded. This
has a number of problems, the worst being that the dynamic linker
does not know where the references come from and can do nothing but
fail if those other modules haven't been loaded. We've more or less
gotten away with that for the limited use-case of datatype transform
modules, but even there, it requires some awkward hacks, most recently
commit 83c249200.
Instead, let's not treat these references as linker-resolvable at all,
but use function pointers that are manually filled in by the module's
_PG_init function. There are few enough contact points that this
doesn't seem unmaintainable, at least for these use-cases. (Note that
the same technique wouldn't work at all for decoupling from libpython
itself, but fortunately that's just a standard shared library and can
be linked to normally.)
This is an initial patch that just converts hstore_plpython. If the
buildfarm doesn't find any fatal problems, I'll work on the other
transform modules soon.
Tom Lane, per an idea of Andres Freund's.
Discussion: <2652.1475512158@sss.pgh.pa.us>
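A minimal sketch of the technique, with illustrative names ("othermodule" and other_fn are not hstore_plpython's real contact points, and whether the commit resolves symbols exactly this way is an assumption):
    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* Function pointer standing in for what used to be a linker-resolved call. */
    typedef Datum (*other_fn_t) (PG_FUNCTION_ARGS);
    static other_fn_t other_fn_p;

    void _PG_init(void);

    void
    _PG_init(void)
    {
        /* Resolve the symbol from the other module when this module is loaded. */
        other_fn_p = (other_fn_t)
            load_external_function("$libdir/othermodule", "other_fn", true, NULL);
    }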
|
|
collect_corrupt_items() failed to initialize tuple.t_self. While
HeapTupleSatisfiesVacuum() doesn't actually use that value, it does
Assert that it's valid, so that the code would dump core if ip_posid
chanced to be zero. (That's somewhat unlikely, which probably explains
how this got missed. In any case it wouldn't matter for field use.)
Also, collect_corrupt_items was returning the wrong TIDs, that is the
contents of t_ctid rather than the tuple's own location. This would
be the same thing in simple cases, but it could be wrong if, for
example, a past update attempt had been rolled back, leaving a live
tuple whose t_ctid doesn't point at itself.
Also, in pg_visibility(), guard against trying to read a page past
the end of the rel. The VM code handles inquiries beyond the end
of the map by silently returning zeroes, and it seems like we should
do the same thing here.
I ran into the assertion failure while using pg_visibility to check
pg_upgrade's behavior, and then noted the other problems while
reading the code.
Report: <29043.1475288648@sss.pgh.pa.us>
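The shape of the first fix, as a hedged fragment (the helper name and variable names are illustrative, not the actual pg_visibility code):
    #include "postgres.h"
    #include "access/htup.h"

    /* Give the tuple its own location before visibility checks, and report
     * that location rather than whatever t_ctid happens to point at. */
    static void
    set_tuple_location(HeapTupleData *tuple, BlockNumber blkno, OffsetNumber offnum)
    {
        ItemPointerSet(&tuple->t_self, blkno, offnum);
    }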
|
|
Prototypes for functions implementing V1-callable functions are no
longer necessary.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
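For example, a minimal V1 function can now be written with just the macro (add_one is a made-up function):
    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* The macro now supplies the extern prototype; no separate
     * "extern Datum add_one(PG_FUNCTION_ARGS);" line is required. */
    PG_FUNCTION_INFO_V1(add_one);

    Datum
    add_one(PG_FUNCTION_ARGS)
    {
        int32   arg = PG_GETARG_INT32(0);

        PG_RETURN_INT32(arg + 1);
    }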
|
|
Using offsetof() with a run-time computed argument is not allowed in
either C or C++. Apparently, gcc allows it, but g++ doesn't.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
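A hedged illustration of the pattern and its portable rewrite (the struct and function are made up):
    #include <stddef.h>

    typedef struct
    {
        int     nitems;
        int     items[1];       /* variable-length tail */
    } IntList;

    static size_t
    int_list_size(int n)
    {
        /*
         * offsetof(IntList, items[n]) relies on a run-time argument and is
         * rejected by g++; compute the same value with a constant offsetof
         * plus explicit arithmetic instead.
         */
        return offsetof(IntList, items) + n * sizeof(int);
    }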
|
|
Now that we track initial privileges on extension objects and changes to
those permissions, we can drop the superuser() checks from the various
functions which are part of the pgstattuple extension and rely on the
GRANT system to control access to those functions.
Since a pg_upgrade will preserve the version of the extension which
existed prior to the upgrade, we can't simply modify the existing
functions but instead need to create new functions which remove the
checks and update the SQL-level functions to use the new functions
(and to REVOKE EXECUTE rights on those functions from PUBLIC).
Thanks to Tom and Andres for adding support for extensions to follow
update paths (see: 40b449a), allowing this patch to be much smaller
since no new base version script needed to be included.
Approach suggested by Noah.
Reviewed by Michael Paquier.
|
|
This patch just exposes COPY's FROM PROGRAM option in contrib/file_fdw.
There don't seem to be any security issues with that which are any worse
than what already exists with file_fdw and COPY; as in the existing cases,
only superusers are allowed to control what gets executed.
A regression test case might be nice here, but choosing a 100% portable
command to run is hard. (We haven't got a test for COPY FROM PROGRAM
itself, either.)
Corey Huinker and Adam Gomaa, reviewed by Amit Langote
Discussion: <CADkLM=dGDGmaEiZ=UDepzumWg-CVn7r8MHPjr2NArj8S3TsROQ@mail.gmail.com>
|
|
That makes the view a lot less disruptive to use on a production system.
Without the locks, you don't get a consistent snapshot across all buffers,
but that's OK. It wasn't a very useful guarantee in practice.
Ivan Kartyshov, reviewed by Tomas Vondra and Robert Haas.
Discussion: <f9d6cab2-73a7-7a84-55a8-07dcb8516ae5@postgrespro.ru>
|
|
It's always been possible to create a zero-dimensional cube by converting
from a zero-length float8 array, but cube_in failed to accept the '()'
representation that cube_out produced for that case, resulting in a
dump/reload hazard. Make it accept the case. Also fix a couple of
other places that didn't behave sanely for zero-dimensional cubes:
cube_size would produce 1.0 when surely the answer should be 0.0,
and g_cube_distance risked a divide-by-zero failure.
Likewise, it's always been possible to create cubes containing float8
infinity or NaN coordinate values, but cube_in couldn't parse such input,
and cube_out produced platform-dependent spellings of the values. Convert
them to use float8in_internal and float8out_internal so that the behavior
will be the same as for float8, as we recently did for the core geometric
types (cf commit 50861cd68). As in that commit, I don't pretend that this
patch fixes all insane corner-case behaviors that may exist for NaNs, but
it's a step forward.
(This change allows removal of the separate cube_1.out and cube_3.out
expected-files, as the platform dependency that previously required them
is now gone: an underflowing coordinate value will now produce an error
not plus or minus zero.)
Make errors from cube_in follow project conventions as to spelling
("invalid input syntax for cube" not "bad cube representation")
and errcode (INVALID_TEXT_REPRESENTATION not SYNTAX_ERROR).
Also a few marginal code cleanups and comment improvements.
Tom Lane, reviewed by Amul Sul
Discussion: <15085.1472494782@sss.pgh.pa.us>
|
|
LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0,
but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf
checks for the individual functions we need, and stop relying on
OPENSSL_VERSION_NUMBER.
Backport to 9.5 and 9.6, like the patch that broke this. In the
back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
has all those functions, so they work as intended.
Per buildfarm member curculio.
Discussion: <2442.1473957669@sss.pgh.pa.us>
|
|
Otherwise, users who have configured shared_buffers >= 256GB won't
be able to use this module. There probably aren't many of those, but
it doesn't hurt anything to fix it so that it works.
Backpatch to 9.4, where MemoryContextAllocHuge was introduced. The
same problem exists in older branches, but there's no easy way to
fix it there.
KaiGai Kohei
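A hedged sketch of the kind of change involved (the record type and function are illustrative, not pg_buffercache's actual code):
    #include "postgres.h"

    typedef struct
    {
        int     bufferid;       /* illustrative per-buffer record */
    } PagesRec;

    static PagesRec *
    alloc_pages_array(int nbuffers)
    {
        /*
         * Allocations that can cross the ~1GB limit of plain palloc() must use
         * the "huge" variant; with very large shared_buffers the per-buffer
         * array can exceed that limit.
         */
        return (PagesRec *)
            MemoryContextAllocHuge(CurrentMemoryContext,
                                   sizeof(PagesRec) * (Size) nbuffers);
    }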
|
|
Changes needed to build at all:
- Check for SSL_new in configure, now that SSL_library_init is a macro.
- Do not access struct members directly. This includes some new code in
pgcrypto, to use the resource owner mechanism to ensure that we don't
leak OpenSSL handles, now that we can't embed them in other structs
anymore.
- RAND_SSLeay() -> RAND_OpenSSL()
Changes that were needed to silence deprecation warnings, but were not
strictly necessary:
- RAND_pseudo_bytes() -> RAND_bytes().
- SSL_library_init() and OPENSSL_config() -> OPENSSL_init_ssl()
- ASN1_STRING_data() -> ASN1_STRING_get0_data()
- DH_generate_parameters() -> DH_generate_parameters_ex()
- Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
riddance!)
Also replace references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER,
for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time
immemorial.
Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
the "CA:true" basic constraint extension now, or OpenSSL will refuse them.
Regenerate the test certificates with that. The "openssl" binary, used to
generate the certificates, is also now more picky, and throws an error
if an X509 extension is specified in "req_extensions", but that section
is empty.
Backpatch to all supported branches, per popular demand. In back-branches,
we still support OpenSSL 0.9.7 and above. OpenSSL 0.9.6 should still work
too, but I didn't test it. In master, we only support 0.9.8 and above.
Patch by Andreas Karlsson, with additional changes by me.
Discussion: <20160627151604.GD1051@msg.df7cb.de>
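For instance, one of the mechanical replacements listed above looks roughly like this (an illustrative fragment, not the actual pgcrypto hunk):
    #include <openssl/rand.h>

    static int
    fill_random(unsigned char *buf, int len)
    {
        /*
         * RAND_pseudo_bytes() is deprecated in OpenSSL 1.1.0; RAND_bytes()
         * returns 1 on success and 0 if the PRNG lacks sufficient entropy.
         */
        return RAND_bytes(buf, len);
    }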
|
|
Add a location field to the DefElem struct, used to parse many utility
commands. Update various error messages to supply error position
information.
To propagate the error position information in a more systematic way,
create a ParseState in standard_ProcessUtility() and pass that to
interested functions implementing the utility commands. This seems
better than passing the query string and then reassembling a parse state
ad hoc, which violates the encapsulation of the ParseState type.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
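A hedged sketch of how a utility-command handler can use the new pieces (the option name and helper are made up):
    #include "postgres.h"
    #include "nodes/parsenodes.h"
    #include "parser/parse_node.h"

    static void
    check_option(ParseState *pstate, DefElem *defel)
    {
        /* Point the error cursor at the offending option via its new location field. */
        if (strcmp(defel->defname, "frobnicate") != 0)
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("unrecognized option \"%s\"", defel->defname),
                     parser_errposition(pstate, defel->location)));
    }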
|
|
When building libpq, ip.c and md5.c were symlinked or copied from
src/backend/libpq into src/interfaces/libpq, but now that we have a
directory specifically for routines that are shared between the server and
client binaries, src/common/, move them there.
Some routines in ip.c were only used in the backend. Keep those in
src/backend/libpq, but rename to ifaddr.c to avoid confusion with the file
that's now in common.
Fix the comment in src/common/Makefile to reflect how libpq actually links
those files.
There are two more files that libpq symlinks directly from src/backend:
encnames.c and wchar.c. I don't feel compelled to move those right now,
though.
Patch by Michael Paquier, with some changes by me.
Discussion: <69938195-9c76-8523-0af8-eb718ea5b36e@iki.fi>
|
|
Where possible, use palloc or pg_malloc instead; otherwise, insert
explicit NULL checks.
Generally speaking, these are places where an actual OOM is quite
unlikely, either because they're in client programs that don't
allocate all that much, or they're very early in process startup
so that we'd likely have had a fork() failure instead. Hence,
no back-patch, even though this is nominally a bug fix.
Michael Paquier, with some adjustments by me
Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
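As a small illustration of the frontend side (a made-up helper):
    #include "postgres_fe.h"
    #include "common/fe_memutils.h"

    /* pg_strdup() reports "out of memory" and exits on failure, so callers
     * need no NULL check, unlike with bare strdup(). */
    static char *
    copy_of(const char *name)
    {
        return pg_strdup(name);
    }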
|
|
The previous API for this function had it returning a malloc'd string.
That meant that callers had to check for NULL return, which few of them
were doing, and it also meant that callers had to remember to free()
the string later, which required extra logic in most cases.
Instead, make simple_prompt() write into a buffer supplied by the caller.
Anywhere that the maximum required input length is reasonably small,
which is almost all of the callers, we can just use a local or static
array as the buffer instead of dealing with malloc/free.
A fair number of callers used "pointer == NULL" as a proxy for "haven't
requested the password yet". Maintaining the same behavior requires
adding a separate boolean flag for that, which adds back some of the
complexity we save by removing free()s. Nonetheless, this nets out
at a small reduction in overall code size, and considerably less code
than we would have had if we'd added the missing NULL-return checks
everywhere they were needed.
In passing, clean up the API comment for simple_prompt() and get rid
of a very-unnecessary malloc/free in its Windows code path.
This is nominally a bug fix, but it does not seem worth back-patching,
because the actual risk of an OOM failure in any of these places seems
pretty tiny, and all of them are client-side not server-side anyway.
This patch is by me, but it owes a great deal to Michael Paquier
who identified the problem and drafted a patch for fixing it the
other way.
Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
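A hedged sketch of the new caller pattern, assuming a (prompt, buffer, buffer-size, echo) argument order; the buffer size, flag variable, and helper are illustrative:
    #include "postgres_fe.h"    /* simple_prompt() is assumed to be declared via port.h */

    static char password[100];
    static bool have_password = false;

    static void
    get_password_if_needed(void)
    {
        /* A separate flag now tracks "asked already"; the buffer itself never
         * needs to be malloc'd or free'd. */
        if (!have_password)
        {
            simple_prompt("Password: ", password, sizeof(password), false);
            have_password = true;
        }
    }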
|
|
OpenSSL officially only supports 1.0.1 and newer. Some OS distributions
still provide patches for 0.9.8, but anything older than that is not
interesting anymore. Let's simplify things by removing compatibility code.
Andreas Karlsson, with small changes by me.
|
|
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters. While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea. Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.
In passing, change TopMemoryContext to use the default allocation
parameters. Formerly it could only be extended 8K at a time. That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there. There seems no good reason
not to let it use growing blocks like most other contexts.
Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain. The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.
Discussion: <21072.1472321324@sss.pgh.pa.us>
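The notational change, roughly (the context name and wrapper function are illustrative; ALLOCSET_DEFAULT_SIZES expands to the same three parameters as before):
    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext
    make_work_context(MemoryContext parent)
    {
        /*
         * Previously spelled with three separate size arguments:
         *   AllocSetContextCreate(parent, "work context",
         *                         ALLOCSET_DEFAULT_MINSIZE,
         *                         ALLOCSET_DEFAULT_INITSIZE,
         *                         ALLOCSET_DEFAULT_MAXSIZE);
         */
        return AllocSetContextCreate(parent, "work context",
                                     ALLOCSET_DEFAULT_SIZES);
    }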
|