2017-08-02Add pgtcl back to the list of externally-maintained client interfaces.Tom Lane
FlightAware is still maintaining this, and indeed is seemingly being more active with it than the pgtclng fork is. List both, for the time being anyway. In the back branches, also back-port commit e20f679f6 and other recent updates to the client-interfaces list. I think these are probably of current interest to users of back branches. I did not touch the list of externally maintained PLs in the back branches, though. Those are much more likely to be server version sensitive, and I don't know which of these PLs work all the way back. Discussion: https://postgr.es/m/20170730162612.1449.58796@wrigleys.postgresql.org
2017-08-02Remove broken and useless entry-count printing in HASH_DEBUG code.Tom Lane
init_htab(), with #define HASH_DEBUG, prints a bunch of hashtable parameters. It used to also print nentries, but commit 44ca4022f changed that to "hash_get_num_entries(hctl)", which is wrong (the parameter should be "hashp"). Rather than correct the coding, though, let's just remove that field from the printout. The table must be empty, since we just finished building it, so expensively calculating the number of entries is rather pointless. Moreover hash_get_num_entries makes assumptions (about not needing locks) which we could do without in debugging code. Noted by Choi Doo-Won in bug #14764. Back-patch to 9.6 where the faulty code was introduced. Discussion: https://postgr.es/m/20170802032353.8424.12274@wrigleys.postgresql.org
2017-08-02Get a snapshot before COPY in table syncPeter Eisentraut
This fixes a crash if the local table has a function index and the function makes non-immutable calls. Reported-by: Scott Milliken <scott@deltaex.com> Author: Masahiko Sawada <sawada.mshk@gmail.com>
2017-08-02Remove duplicate setting of SSL_OP_SINGLE_DH_USE option.Tom Lane
Commit c0a15e07c moved the setting of OpenSSL's SSL_OP_SINGLE_DH_USE option into a new subroutine initialize_dh(), but forgot to remove it from where it was. SSL_CTX_set_options() is a trivial function, amounting indeed to just "ctx->options |= op", hence there's no reason to contort the code or break separation of concerns to avoid calling it twice. So separating the DH setup from disabling of old protocol versions is a good change, but we need to finish the job. Noted while poking into the question of SSL session tickets.
2017-08-02Fix OBJECT_TYPE/OBJECT_DOMAIN confusionPeter Eisentraut
This doesn't have a significant impact except that now SECURITY LABEL ON DOMAIN rejects types that are not domains. Reported-by: 高增琦 <pgf00a@gmail.com>
2017-08-02Make temporary tables use shared storage on datanodesPavan Deolasee
Since a temporary table may be accessed by multiple backends on a datanode, XL mostly treats such tables as regular tables. But the technique that was used to distinguish between temporary tables that may need shared storage and those accessed only by a single backend wasn't foolproof. We were relying on global session activation to make that distinction. This clearly fails when a background process, such as the autovacuum process, tries to figure out whether a table is using local or shared storage. This was leading to various problems, such as the underlying file system objects for a table getting cleaned up without first discarding all references to the table from the shared buffers. We now make all temp tables use shared storage on the datanodes and thus simplify things. Only EXECUTE DIRECT does not set up a global session anyway, so I don't think this will have any meaningful impact on performance. This should fix the checkpoint failures during regression tests.
2017-08-02Revert test case added by commit 1e165d05fe06a9072867607886f818bc255507db.Tom Lane
The buildfarm is still showing at least three distinct behaviors for a bad locale name in CREATE COLLATION. Although this test was helpful for getting the error reporting code into some usable shape, it doesn't seem worth carrying multiple expected-files in order to support the test in perpetuity. So pull it back out. Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01Second try at getting useful errors out of newlocale/_create_locale.Tom Lane
The early buildfarm returns for commit 1e165d05f are pretty awful: not only does Windows not return a useful error, but it looks like a lot of Unix-ish platforms don't either. Given the number of different errnos seen so far, guess that what's really going on is that some newlocale() implementations fail to set errno at all. Hence, let's try zeroing errno just before newlocale() and then if it's still zero report as though it's ENOENT. That should cover the Windows case too. It's clear that we'll have to drop the regression test case, unless we want to maintain a separate expected-file for platforms without HAVE_LOCALE_T. But I'll leave it there awhile longer to see if this actually improves matters or not. Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01Suppress less info in regression tests using DROP CASCADE.Tom Lane
DROP CASCADE doesn't currently promise to visit dependent objects in a fixed order, so when the regression tests use it, we typically need to suppress the details of which objects get dropped in order to have predictable test output. Traditionally we've done that by setting client_min_messages higher than NOTICE, but there's a better way: we can "\set VERBOSITY terse" in psql. That suppresses the DETAIL message with the object list, but we still get the basic notice telling how many objects were dropped. So at least the test case can verify that the expected number of objects were dropped. The VERBOSITY method was already in use in a few places, but run around and use it wherever it makes sense. Discussion: https://postgr.es/m/10766.1501608885@sss.pgh.pa.us
2017-08-01Try to deliver a sane message for _create_locale() failure on Windows.Tom Lane
We were just printing errno, which is certainly not gonna work on Windows. Now, it's not entirely clear from Microsoft's documentation whether _create_locale() adheres to standard Windows error reporting conventions, but let's assume it does and try to map the GetLastError result to an errno. If this turns out not to work, probably the best thing to do will be to assume the error is always ENOENT on Windows. This is a longstanding bug, but given the lack of previous field complaints, I'm not excited about back-patching it. Per report from Murtuza Zabuawala. Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01doc: Fix typoPeter Eisentraut
Author: Fabien COELHO <coelho@cri.ensmp.fr>
2017-08-01Allow creation of C/POSIX collations without depending on libc behavior.Tom Lane
Most of our collations code has special handling for the locale names "C" and "POSIX", allowing those collations to be used whether or not the system libraries think those locale names are valid, or indeed whether said libraries even have any locale support. But we missed handling things that way in CREATE COLLATION. This meant you couldn't clone the C/POSIX collations, nor explicitly define a new collation using those locale names, unless the libraries allow it. That's pretty pointless, as well as being a violation of pg_newlocale_from_collation's API specification. The practical effect of this change is quite limited: it allows creating such collations even on platforms that don't HAVE_LOCALE_T, and it allows making "POSIX" collation objects on Windows, which before this would only let you make "C" collation objects. Hence, even though this is a bug fix IMO, it doesn't seem worth the trouble to back-patch. In passing, suppress the DROP CASCADE detail messages at the end of the collation regression test. I'm surprised we've never been bit by message ordering issues there. Per report from Murtuza Zabuawala. Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01Further improve consistency of configure's program searching.Tom Lane
Peter Eisentraut noted that commit 40b9f1921 had broken a configure behavior that some people might rely on: AC_CHECK_PROGS(FOO,...) will allow the search to be overridden by specifying a value for FOO on configure's command line or in its environment, but AC_PATH_PROGS(FOO,...) accepts such an override only if it's an absolute path. We had worked around that behavior for some, but not all, of the pre-existing uses of AC_PATH_PROGS by just skipping the macro altogether when FOO is already set. Let's standardize on that workaround for all uses of AC_PATH_PROGS, new and pre-existing, by wrapping AC_PATH_PROGS in a new macro PGAC_PATH_PROGS. While at it, fix a deficiency of the old workaround code by making sure we report the setting to configure's log. Eventually I'd like to improve PGAC_PATH_PROGS so that it converts non-absolute override inputs to absolute form, eg "PYTHON=python3" becomes, say, PYTHON = /usr/bin/python3. But that will take some nontrivial coding so it doesn't seem like a thing to do in late beta. Discussion: https://postgr.es/m/90a92a7d-938e-507a-3bd7-ecd2b4004689@2ndquadrant.com
2017-08-01Comment fix for partition_rbound_cmp().Dean Rasheed
This was an oversight in d363d42. Beena Emerson
2017-07-31Fix comment.Tatsuo Ishii
XLByteToSeg and XLByteToPrevSeg calculate only a segment number. The definitions of these macros were modified by commit dfda6ebaec6763090fb78b458a979b558c50b39b but the comment remained unchanged. Patch by Yugo Nagata. Back patched to 9.3 and beyond.
2017-07-31Fix typoPeter Eisentraut
Author: Masahiko Sawada <sawada.mshk@gmail.com>
2017-07-31Fix typoPeter Eisentraut
Author: Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>
2017-07-31Doc: add v10 release notes entries for the DH parameter changes.Heikki Linnakangas
2017-07-31Always use 2048 bit DH parameters for OpenSSL ephemeral DH ciphers.Heikki Linnakangas
1024 bits is considered weak these days, but OpenSSL always passes 1024 as the key length to the tmp_dh callback. All the code to handle other key lengths is, in fact, dead. To remedy those issues:

* Only include hard-coded 2048-bit parameters.
* Set the parameters directly with SSL_CTX_set_tmp_dh(), without the callback.
* The name of the file containing the DH parameters is now a GUC. This replaces the old hardcoded "dh1024.pem" filename. (The files for other key lengths, dh512.pem, dh2048.pem, etc. were never actually used.)

This is not a new problem, but it doesn't seem worth the risk and churn to backport. If you care enough about the strength of the DH parameters on old versions, you can create custom DH parameters, with as many bits as you wish, and put them in the "dh1024.pem" file. Per report by Nicolas Guini and Damian Quiroga. Reviewed by Michael Paquier. Discussion: https://www.postgresql.org/message-id/CAMxBoUyjOOautVozN6ofzym828aNrDjuCcOTcCquxjwS-L2hGQ@mail.gmail.com
2017-07-31Doc: specify that the minimum supported version of Perl is 5.8.3.Tom Lane
Previously the docs just said "5.8 or later". Experimentation shows that while you can build on Unix from a git checkout with 5.8.0, compiling recent PL/Perl requires at least 5.8.1, and you won't be able to run the TAP tests with less than 5.8.3 because that's when they added "prove". (I do not have any information on just what the MSVC build scripts require.) Since all these versions are quite ancient, let's not split hairs in the docs, but just say that 5.8.3 is the minimum requirement. Discussion: https://postgr.es/m/16894.1501392088@sss.pgh.pa.us
2017-07-31Record full paths of programs sought by "configure".Tom Lane
Previously we had a mix of uses of AC_CHECK_PROG[S] and AC_PATH_PROG[S]. The only difference between those macros is that the latter emits the full path to the program it finds, eg "/usr/bin/prove", whereas the former emits just "prove". Let's standardize on always emitting the full path; this is better for documentation of the build, and it might prevent some types of failures if later build steps are done with a different PATH setting. I did not touch the AC_CHECK_PROG[S] calls in ax_pthread.m4 and ax_prog_perl_modules.m4. There seems no need to make those diverge from upstream, since we do not record the programs sought by the former, while the latter's call to AC_CHECK_PROG(PERL,...) will never be reached. Discussion: https://postgr.es/m/25937.1501433410@sss.pgh.pa.us
2017-07-31Tighten coding for non-composite case in plperl's return_next.Tom Lane
Coverity complained about this code's practice of using scalar variables as single-element arrays. While that's really just nitpicking, it probably is more readable to declare them as arrays, so let's do that. A more important point is that the code was just blithely assuming that the result tupledesc has exactly one column; if it doesn't, we'd likely get a crash of some sort in tuplestore_putvalues. Since the tupledesc is manufactured outside of plperl, that seems like an uncomfortably long chain of assumptions. We can nail it down at little cost with a sanity check earlier in the function.
2017-07-31Fix function comment for dumpACL()Stephen Frost
The comment for dumpACL() got neglected when initacls and initracls were added and the discussion of what 'racls' is wasn't very clear either. Per complaint from Tom.
2017-07-31Don't run ALTER ENUM in an autocommit block on remote nodesPavan Deolasee
Before PG 10, Postgres did not allow ALTER ENUM to be run inside a transaction block, so we used to run these commands in auto-commit mode on the remote nodes. But now Postgres has removed the restriction, so we also run the statements in a transaction block. This fixes regression failures in the 'enum' test case.
2017-07-31Copy distribution information correctly to ProjectSet pathPavan Deolasee
ProjectSet is a new path type in PG 10 and we had missed copying the distribution information to the path. This was resulting in failures in many regression test cases. The lack of distribution information prevented the distributed query planner from adding a Remote Subplan node on top of the plan, thus resulting in local execution of the plan. Since the underlying table is actually a distributed table, local execution fails to fetch any data. Fix this by properly copying the distribution info. Several regression failures are fixed automatically by this patch.
2017-07-31Add missing comment in postgresql.conf.Tatsuo Ishii
current_source requires a server restart to reflect the new value. Per Yugo Nagata and Masahiko Sawada. Back patched to 9.2 and beyond.
2017-07-31Add missing comment in postgresql.conf.Tatsuo Ishii
dynamic_shared_memory_type requires a server restart to reflect the new value. Per Yugo Nagata and Masahiko Sawada. Back patched to 9.4 and beyond.
2017-07-31Add missing comment in postgresql.conf.Tatsuo Ishii
max_logical_replication_workers requires a server restart to reflect the new value. Per Yugo Nagata. Minor editing by me.
2017-07-31Partially accept plan changes in updatable_viewsTomas Vondra
Upstream commit 215b43cdc8d6b4a1700886a39df1ee735cb0274d significantly reworked planning of leaky functions. In practice that change means we no longer have to push leaky functions into a subquery, which greatly simplifies some plans, including the two in this patch. This commit accepts the plans only partially, though. It uses the plans from upstream, and adds a Remote Subquery Scan node at the top, so we accept the general plan shape change. But there are a few additional differences that need further evaluation, particularly in target lists (Postgres-XL generating more entries than upstream) and SubPlans (Postgres-XL only generating one subplan, while upstream generates two alternative ones).
2017-07-31Accept aggregation plan changes in xc_remote testsTomas Vondra
The plans changed mainly due to abandoning the custom two-phase aggregation implementation and using the upstream parallel aggregation. That means we have stopped showing the schema name in target lists, so instead of

    Output: pg_catalog.avg((avg(xcrem_employee.salary)))

the EXPLAIN now shows

    Output: avg(xcrem_employee.salary)

and we also do projection at the scan nodes, so the target list only shows the necessary subset of columns. A somewhat surprising change is that the plans switch from distributed aggregate plans like this one

    -> Aggregate
       -> Remote Subquery Scan
          -> Aggregate
             -> Seq Scan

to always performing a simple (non-distributed) aggregate like this

    -> Aggregate
       -> Remote Subquery Scan
          -> Seq Scan

This happens due to create_grouping_paths() relying on the consider_parallel flag when setting try_distributed_aggregate, disabling distributed aggregation when consider_parallel=false. Both affected plans are however for UPDATE queries, and PostgreSQL disables parallelism for queries that do writes, so we end up with try_distributed_aggregate=false. We should probably enable distributed aggregates in these cases, but we can't ignore consider_parallel entirely, as we likely need some of the checks. We will probably end up with a consider_distributed flag, set in a similar way to consider_parallel, but that's more an enhancement than a bug fix.
2017-07-31Reject SQL functions containing utility statementsTomas Vondra
The check was not effective for the same reason as in 5a54abb7acd, that is, not accounting for XL wrapping the original command in a RawStmt. Fix that by checking parsetree->stmt, and also add an assert checking that we actually got a RawStmt in the first place.
2017-07-31Produce proper error message for COPY (SELECT INTO)Tomas Vondra
Produce the right error message for COPY (SELECT INTO) queries, that is

    ERROR: COPY (SELECT INTO) is not supported

instead of the incorrect

    ERROR: COPY query must have a RETURNING clause

The root cause is that the check in BeginCopy() was testing raw_query, but XL wraps the original command in a RawStmt, so we should be checking raw_query->stmt instead.
2017-07-31Add explicit VACUUM to inet test to actually do IOSTomas Vondra
Some of the queries in the inet test are meant to exercise Index Only Scans. Postgres-XL was however not picking those plans due to stale stats on the coordinator (reltuples and relpages in pg_class). On plain PostgreSQL the tests work fine, as CREATE INDEX also updates the statistics stored in the pg_class catalog. For example this

    CREATE TABLE t (a INT);
    INSERT INTO t SELECT i FROM generate_series(1,1000) s(i);
    SELECT relpages, reltuples FROM pg_class WHERE relname = 't';
    CREATE INDEX ON t(a);
    SELECT relpages, reltuples FROM pg_class WHERE relname = 't';

will show zeroes before the CREATE INDEX command, and accurate values after it completes. On Postgres-XL that is not the case, and we will return zeroes even after the CREATE INDEX command. To actually update the statistics we need to fetch information from the datanodes the way VACUUM does it. Fixed by adding an explicit VACUUM call right after the CREATE INDEX, to fetch the stats from the datanodes and update the coordinator catalogs.
2017-07-31Tweak the query plan check in join regression testTomas Vondra
The test expects the plan to use an Index Scan, but with 1000 rows the cost differences are very small. With two datanodes, we moreover compute the estimates as if each table had 500 rows, making the cost difference even smaller. Fixed by increasing the total number of rows to 2000, which means each datanode has about 1000 rows and uses the same cost estimates as upstream.
2017-07-31Remove extra snprintf call in pg_tablespace_databasesTomas Vondra
The XL code did two function calls in the else branch, about like this:

    else
        /* Postgres-XC tablespaces also include node name in path */
        sprintf(fctx->location, "pg_tblspc/%u/%s_%s",
                tablespaceOid, TABLESPACE_VERSION_DIRECTORY, PGXCNodeName);
        fctx->location = psprintf("pg_tblspc/%u/%s_%s",
                                  tablespaceOid, TABLESPACE_VERSION_DIRECTORY,
                                  PGXCNodeName);

which is wrong, as only the first call is actually in the else branch; the second call is executed unconditionally. In fact, the two calls attempt to construct the same location string, but the sprintf call assumes the 'fctx->location' string is already allocated. It actually is not, so the call is likely to cause a segfault. Fixed by removing the sprintf() call, keeping just the psprintf() one. Noticed thanks to GCC 6.3 complaining about incorrect indentation. Backpatch to XL 9.5.
2017-07-31Fix confusing indentation in gtm_client.cTomas Vondra
GCC 6.3 complains that the indentation in gtm_sync_standby() is somewhat confusing, as it might mislead people into thinking that a command is part of an if branch. So fix that by removing the unnecessary indentation.
2017-07-31Refactor the construction of distributed grouping pathsTomas Vondra
The code generating distributed grouping paths was originally structured like this:

    if (try_distributed_aggregation)
    {
        ...
    }

    if (can_sort && try_distributed_aggregation)
    {
        ...
    }

    if (can_hash && try_distributed_aggregation)
    {
        ...
    }

It's refactored like this, to resemble the upstream part of the code:

    if (try_distributed_aggregation)
    {
        ...

        if (can_sort)
        {
            ...
        }

        if (can_hash)
        {
            ...
        }
    }
2017-07-31Accept plan change in xc_groupby regression testTomas Vondra
The plan changed in two ways. Firstly, the target lists changed due to abandoning the custom distributed aggregation and reusing the upstream partial aggregation code. That means we're no longer prefixing the aggregate with the schema name, etc. The plan also switches from distributed aggregation to plain aggregation with all the work done on top of a remote query. This happens simply due to costing, as the tables are tiny and two-phase aggregation has some overhead. The original implementation (as in XL 9.5) distributed the aggregate unconditionally, ignoring the costing. Part of the problem is that the query groups by two columns from two different tables, resulting in overestimation of the number of groups. That means the optimizer thinks distributing the aggregation would not reduce the number of rows, which increases the cost estimate as each row requires network transfer and the finalize aggregate also depends on the number of input rows. We could make the tables larger and the optimizer would eventually switch to a distributed aggregate. For example this seems to do the trick:

    insert into xc_groupby_tab1 select 1, mod(i,1000)
      from generate_series(1,20000) s(i);
    insert into xc_groupby_tab2 select 1, mod(i,1000)
      from generate_series(1,20000) s(i);

But it does not seem worth it, considering it's just a workaround for the estimation issue and it would increase test duration. And we already have other regression tests exercising plausible queries that benefit from distributed aggregation. So just accept the plan change.
2017-07-30Move ExecProcNode from dispatch to function pointer based model.Andres Freund
This allows us to add stack-depth checks the first time an executor node is called, and skip that overhead on following calls. Additionally it yields a nice speedup. While it'd probably have been a good idea to have that check all along, it has become more important after the new expression evaluation framework in b8d7f053c5c2bf2a7e - there's no stack depth check in common paths anymore now. We previously relied on ExecEvalExpr() being executed somewhere. We should move towards that model for further routines, but as this is required for v10, it seems better to only do the necessary (which already is quite large). Author: Andres Freund, Tom Lane Reported-By: Julien Rouhaud Discussion: https://postgr.es/m/22833.1490390175@sss.pgh.pa.us https://postgr.es/m/b0af9eaa-130c-60d0-9e4e-7a135b1e0c76@dalibo.com
2017-07-30Move interrupt checking from ExecProcNode() to executor nodes.Andres Freund
In a followup commit ExecProcNode(), and especially the large switch it contains, will largely be replaced by a function pointer directly to the correct node. The node functions will then get invoked by a thin inline function wrapper. To avoid having to include miscadmin.h in headers - CHECK_FOR_INTERRUPTS() - move the interrupt checks into the individual executor routines. While looking through all executor nodes, I noticed a number of arguably missing interrupt checks, add these too. Author: Andres Freund, Tom Lane Reviewed-By: Tom Lane Discussion: https://postgr.es/m/22833.1490390175@sss.pgh.pa.us
2017-07-28Include publication owner's name in the output of \dRp+.Tom Lane
Without this, \dRp prints information that \dRp+ does not, which seems pretty odd. Daniel Gustafsson Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-28PL/Perl portability fix: absorb relevant -D switches from Perl.Tom Lane
The Perl documentation is very clear that stuff calling libperl should be built with the compiler switches shown by Perl's $Config{ccflags}. We'd been ignoring that up to now, and mostly getting away with it, but recent Perl versions contain ABI compatibility cross-checks that fail on some builds because of this omission. In particular the sizeof(PerlInterpreter) can come out different due to some fields being added or removed; which means we have a live ABI hazard that we'd better fix rather than continuing to sweep it under the rug. However, it still seems like a bad idea to just absorb $Config{ccflags} verbatim. In some environments Perl was built with a different compiler that doesn't even use the same switch syntax. -D switch syntax is pretty universal though, and absorbing Perl's -D switches really ought to be enough to fix the problem. Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and -D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on platforms where they're relevant. Adopting those seems dangerous too. It's unclear whether a build wherein Perl and Postgres have different ideas of sizeof(off_t) etc would work, or whether anyone would care about making it work. But it's dead certain that having different stdio ABIs in core Postgres and PL/Perl will not work; we've seen that movie before. Therefore, let's also ignore -D switches for symbols beginning with underscore. The symbols that we actually need to import should be the ones mentioned in perl.h's PL_bincompat_options stanza, and none of those start with underscore, so this seems likely to work. (If it turns out not to work everywhere, we could consider intersecting the symbols mentioned in PL_bincompat_options with the -D switches. But that will be much more complicated, so let's try this way first.) This will need to be back-patched, but first let's see what the buildfarm makes of it. 
Ashutosh Sharma, some adjustments by me Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-28PL/Perl portability fix: avoid including XSUB.h in plperl.c.Tom Lane
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros that replace a whole lot of basic libc functions with Perl functions. We can't tolerate that in plperl.c; it breaks at least PG_TRY and probably other stuff. The core idea of this patch is to include XSUB.h only in the .xs files where it's really needed, and to move any code broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c. The reason this hasn't been a problem before is that our build techniques did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl, even on some platforms where Perl thinks it is defined. That's about to change in order to fix a nasty portability issue, so we need this work to make the code safe for that. Rather unaccountably, the Perl people chose XSUB.h as the place to provide the versions of the aTHX/aTHX_ macros that are needed by code that's not explicitly aware of the MULTIPLICITY API conventions. Hence, just removing XSUB.h from plperl.c fails miserably. But we can work around that by defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of XSUB.h a no-op anyway). As explained in perlguts.pod, that means we need to add a "dTHX" macro call in every C function that calls a Perl API function. In most of them we just add this at the top; but since the macro fetches the current Perl interpreter pointer, more care is needed in functions that switch the active interpreter. Lack of the macro is easily recognized since it results in bleats about "my_perl" not being defined. (A nice side benefit of this is that it significantly reduces the number of fetches of the current interpreter pointer. On my machine, plperl.so gets more than 10% smaller, and there's probably some performance win too. We could reduce the number of fetches still more by decorating the code with pTHX_/aTHX_ macros to pass the interpreter pointer around, as explained by perlguts.pod; but that's a task for another day.) 
Formatting note: pgindent seems happy to treat "dTHX;" as a declaration so long as it's the first thing after the left brace, as we'd already observed with respect to the similar macro "dSP;". If you try to put it later in a set of declarations, pgindent puts ugly extra space around it. Having removed XSUB.h from plperl.c, we need only move the support functions for spi_return_next and util_elog (both of which use PG_TRY) out of the .xs files and into plperl.c. This seems sufficient to avoid the known problems caused by PERL_IMPLICIT_SYS, although we could move more code if additional issues emerge. This will need to be back-patched, but first let's see what the buildfarm makes of it. Patch by me, with some help from Ashutosh Sharma Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-07-27Fix psql tab completion for CREATE USER MAPPING.Tom Lane
After typing CREATE USER M..., it would not fill in MAPPING FOR, even though that was clearly intended behavior. Jeff Janes Discussion: https://postgr.es/m/CAMkU=1wo2iQ6jWnN=egqOb5NxEPn0PpANEtKHr3uPooQ+nYPtw@mail.gmail.com
2017-07-27Standardize describe.c's behavior for no-matching-objects a bit more.Tom Lane
Most functions in this file are content to print an empty table if there are no matching objects. In some, the behavior is to loop over all matching objects and print a table for each one; therefore, without any extra logic, nothing at all would be printed if no objects match. We accept that outcome in QUIET mode, but in normal mode it seems better to print a helpful message. The new \dRp+ command had not gotten that memo; fix it. listDbRoleSettings() is out of step on this, but I think it's better for it to print a custom message rather than an empty table, because of the possibility that the user is confused about what the pattern arguments mean or which is which. The original message wording was entirely useless for clarifying that, though, not to mention being unlike the wordings used elsewhere. Improve the text, and also print the messages with psql_error as is the general custom here. listTables() is also out in left field, but since it's such a heavily used function, I'm hesitant to change its behavior so much as to print an empty table rather than a custom message. People are probably used to getting a message. But we can make the wording more standardized and helpful, and print it with psql_error rather than printing to stdout. In both listDbRoleSettings and listTables, we play dumb and emit an empty table, not a custom message, in QUIET mode. That was true before and I see no need to change it. Several of the places printing such messages risked dumping core if no pattern string had been provided; make them more wary. (This case is presently unreachable in describeTableDetails; but it shouldn't be assuming that command.c will never pass it a null. The text search functions would only reach the case if a database contained no text search objects, which is also currently impossible since we pin the built-in objects, but again it seems unwise to assume that here.) 
Daniel Gustafsson, tweaked a bit by me Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-27Avoid use of sprintf/snprintf in describe.c.Tom Lane
Most places were already using the PQExpBuffer library for constructing variable-length strings; bring the two stragglers into line. describeOneTSParser was living particularly dangerously since it wasn't even using snprintf(). Daniel Gustafsson Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-27Sync listDbRoleSettings() with the rest of the world.Tom Lane
listDbRoleSettings() handled its server version check randomly differently from every other comparable function in describe.c, not only as to code layout but also message wording. It also leaked memory, because its PQExpBuffer management was also unlike everyplace else (and wrong). Also fix an error-case leak in add_tablespace_footer(). In passing, standardize the format of function header comments in describe.c --- we usually put "/*" alone on a line. Daniel Gustafsson, memory leak fixes by me Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-27Fix very minor memory leaks in psql's command.c.Tom Lane
\drds leaked its second pattern argument if any, and \connect leaked any empty-string or "-" arguments. These are old bugs, but it's hard to imagine any real use-case where the leaks could amount to anything meaningful, so not bothering with a back-patch. Daniel Gustafsson and Tom Lane Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-27Work around Msys weakness in Testlib.pm's command_like()Andrew Dunstan
When output of IPC::Run::run () is redirected to scalar references, in certain circumstances the Msys perl does not correctly detect that the end of file has been seen, making the test hang indefinitely. One such circumstance is when the command is 'pg_ctl start', and such a change was made in commit f13ea95f9e. The workaround, which only applies on MSys, is to redirect the output to temporary files and then read them in when the process has finished. Patch by me, reviewed and tweaked by Tom Lane.
2017-07-26Clean up SQL emitted by psql/describe.c.Tom Lane
Fix assorted places that had not bothered with the convention of prefixing catalog and function names with "pg_catalog.". That could possibly result in query failure when running with a nondefault search_path. Also fix two places that weren't quoting OID literals. I think the latter hasn't mattered much since about 7.3, but it's still a bad idea to be doing it in 99 places and not in 2 others. Also remove a useless EXISTS sub-select that someone had stuck into describeOneTableDetails' queries for child tables. We just got the OID out of pg_class, so I hardly see how checking that it exists in pg_class was doing anything helpful. In passing, try to improve the emitted formatting of a couple of these queries, though I didn't work really hard on that. And merge unnecessarily duplicative coding in some other places. Much of this was new in HEAD, but some was quite old; back-patch as appropriate.