path: root/contrib
Age  Commit message  Author
2016-11-08  Replace uses of SPI_modifytuple that intend to allocate in current context.  (Tom Lane)
Invent a new function heap_modify_tuple_by_cols() that is functionally equivalent to SPI_modifytuple except that it always allocates its result by simple palloc. I chose however to make the API details a bit more like heap_modify_tuple: pass a tupdesc rather than a Relation, and use bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the intended behavior is to allocate in current context. (There actually are only two call sites left that depend on the old behavior, which makes me wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes of replacing a single column (or, really, any fixed number of columns). There are a number of places where it would simplify the code to change over, but I resisted that temptation for the moment ... everywhere except in plpgsql's exec_assign_value(); changing that might offer some small performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
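A minimal sketch of how a call site might use the new function to replace a single column, with the result palloc'd in the current context. The argument order (tuple, tupdesc, count, then parallel column/value/isnull arrays) is assumed from the description above; the helper name and column number are illustrative:

    /*
     * Hedged sketch, not the committed code: replace column 2 of a tuple
     * with a new value, getting the result palloc'd in the current context.
     */
    #include "postgres.h"
    #include "access/htup_details.h"

    static HeapTuple
    replace_second_column(HeapTuple oldtuple, TupleDesc tupdesc, Datum newval)
    {
        int     replCol = 2;          /* 1-based attribute number to replace */
        Datum   replValue = newval;
        bool    replIsnull = false;   /* bool convention, not SPI's char codes */

        return heap_modify_tuple_by_cols(oldtuple, tupdesc,
                                         1, &replCol, &replValue, &replIsnull);
    }

SPI_modifytuple itself remains available for the two call sites that still depend on allocation in the SPI context.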
2016-11-08  Make SPI_fnumber() reject dropped columns.  (Tom Lane)
There's basically no scenario where it's sensible for this to match dropped columns, so put a test for dropped-ness into SPI_fnumber() itself, and excise the test from the small number of callers that were paying attention to the case. (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0 if we're disallowing system columns. Previously there was a mixture of "< 0" and "<= 0" tests. This makes no practical difference since SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not legacy cruft, distinguish "column not found" from "can't handle system column".

Per discussion with Jim Nasby; this supersedes his original patch that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
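A rough sketch of the call-site convention described above, reporting "column not found" and system columns separately; the function name and error messages are illustrative, not taken from the patch:

    #include "postgres.h"
    #include "executor/spi.h"

    /* Hedged sketch of a call site following the convention above. */
    static int
    lookup_user_column(TupleDesc tupdesc, const char *colname)
    {
        int     attnum = SPI_fnumber(tupdesc, colname);

        if (attnum == SPI_ERROR_NOATTRIBUTE)   /* now also covers dropped columns */
            ereport(ERROR,
                    (errcode(ERRCODE_UNDEFINED_COLUMN),
                     errmsg("column \"%s\" does not exist", colname)));

        if (attnum <= 0)                       /* "<= 0", not "< 0" */
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot use system column \"%s\"", colname)));

        return attnum;
    }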
2016-11-07Revert "Delete contrib/xml2's legacy implementation of xml_is_well_formed()."Tom Lane
This partly reverts commit 20540710e83f2873707c284a0c0693f0b57156c4. Since we've given up on adding PGDLLEXPORT markers to PG_FUNCTION_INFO_V1, there's no need to remove the legacy compatibility function. I kept the documentation changes, though, as they seem appropriate anyway.
2016-11-07Revert "Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 ↵Tom Lane
macro." This reverts commit c8ead2a3974d3eada145a0e18940150039493cc9. Seems there is no way to do this that doesn't cause MSVC to give warnings, so let's just go back to the way we've been doing it. Discussion: <11843.1478358206@sss.pgh.pa.us>
2016-11-04  Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.  (Tom Lane)
Second try at the change originally made in commit 8518583cd; this time with contrib updates so that manual extern declarations are also marked with PGDLLEXPORT. The release notes should point this out as a significant source-code change for extension authors, since they'll have to make similar additions to avoid trouble on Windows.

Laurenz Albe, doc change by me

Patch: <A737B7A37273E048B164557ADEF4A58B53962ED8@ntex2010a.host.magwien.gv.at>
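A hypothetical extension function showing the kind of addition this would have required of extension authors (the change was later reverted; see the 2016-11-07 entry above). The function name is illustrative:

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* The manual prototype must carry PGDLLEXPORT too, matching the macro. */
    PGDLLEXPORT Datum add_one(PG_FUNCTION_ARGS);

    PG_FUNCTION_INFO_V1(add_one);

    Datum
    add_one(PG_FUNCTION_ARGS)
    {
        int32   arg = PG_GETARG_INT32(0);

        PG_RETURN_INT32(arg + 1);
    }

With the change reverted, a plain "Datum add_one(PG_FUNCTION_ARGS);" declaration works as before.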
2016-11-04  Delete contrib/xml2's legacy implementation of xml_is_well_formed().  (Tom Lane)
This function is unreferenced in modern usage; it was superseded in 9.1 by a core function of the same name. It has been left in place in the C code only so that pre-9.1 SQL definitions of the contrib/xml2 functions would continue to work. Six years seems like enough time for people to have updated to the extension-style version of the xml2 module, so let's drop this.

The key reason for not keeping it any longer is that we want to stick an explicit PGDLLEXPORT into PG_FUNCTION_INFO_V1(), and the similarity of name to the core function creates a conflict that compilers will complain about. Extracted from a larger patch for that purpose. I'm committing this change separately to give it more visibility in the commit logs.

While at it, remove the documentation entry that claimed that xml_is_well_formed() is a function provided by contrib/xml2, and instead mention the even more ancient alias xml_valid().

Laurenz Albe, doc change by me

Patch: <A737B7A37273E048B164557ADEF4A58B53962ED8@ntex2010a.host.magwien.gv.at>
2016-11-04  Fix gin_leafpage_items().  (Tom Lane)
On closer inspection, commit 84ad68d64 broke gin_leafpage_items(), because the aligned copy of the page got palloc'd in a short-lived context whereas it needs to be in the SRF's multi_call_memory_ctx. This was not exposed by the regression test, because the regression test doesn't actually exercise the function in a meaningful way. Fix the code bug, and extend the test in what I hope is a portable fashion.
2016-11-04  pageinspect: Fix unaligned struct access in GIN functions  (Peter Eisentraut)
The raw page data that is passed into the functions will not be aligned at 8-byte boundaries. Casting that to a struct and accessing int64 fields will result in unaligned access. On most platforms, you get away with it, but it will result in a crash on pickier platforms such as ia64 and sparc64.
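Together with the gin_leafpage_items() fix above, the pattern is to memcpy the raw bytea payload into a palloc'd (and therefore maxalign'd) copy, allocated in the SRF's multi_call_memory_ctx so it survives across calls. A rough sketch with an illustrative helper name, not the actual pageinspect code:

    #include "postgres.h"
    #include "funcapi.h"
    #include "storage/bufpage.h"

    /*
     * Hedged sketch: return an aligned, long-lived copy of a raw page passed
     * in as bytea.  palloc'd memory is maxalign'd, unlike the bytea payload,
     * and multi_call_memory_ctx keeps the copy valid across SRF calls.
     */
    static Page
    get_aligned_page_copy(bytea *raw, FuncCallContext *fctx)
    {
        MemoryContext oldctx;
        Page          page;

        oldctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
        page = palloc(BLCKSZ);
        memcpy(page, VARDATA(raw), BLCKSZ);
        MemoryContextSwitchTo(oldctx);

        return page;
    }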
2016-11-04  postgres_fdw: Fix typo in comment.  (Robert Haas)
Etsuro Fujita
2016-11-03  psql: Split up "Modifiers" column in \d and \dD  (Peter Eisentraut)
Make separate columns "Collation", "Nullable", "Default". Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
2016-11-03  Use NIL instead of NULL for an empty List.  (Robert Haas)
Tatsuro Yamada, reviewed by Ashutosh Bapat
2016-11-02  Don't convert Consts into Vars during setrefs.c processing.  (Tom Lane)
While converting expressions in an upper-level plan node so that they reference Vars and expressions provided by the input plan node(s), don't convert plain Const items, even if there happens to be a matching Const in the input. It's silly to do so because a Var is more expensive to execute than a Const. Moreover, converting can fool ExecCheckPlanOutput's check that an insert or update query inserts nulls into dropped columns, leading to "query provides a value for a dropped column" errors during INSERT or UPDATE on a table with a dropped column. We could solve this by making that check more complicated, but I don't see the point; this fix should save a marginal number of cycles, and it also makes for less messy EXPLAIN output, as shown by the ensuing regression test result changes.

Per report from Pavel Hanák. I have not incorporated a test case based on that example, as there doesn't seem to be a simple way of checking this in isolation without making a bunch of assumptions about other planner and SQL-function behavior.

Back-patch to 9.6. This setrefs.c behavior exists much further back, but there is currently no reason to think that it causes problems before 9.6.

Discussion: <83shraampf.fsf@is-it.eu>
2016-11-02  pageinspect: Make page test more portable  (Peter Eisentraut)
Choose test data that makes the output independent of endianness.
2016-11-02  Fix portability bug in gin_page_opaque_info().  (Tom Lane)
Somebody apparently thought that "if Int32GetDatum is good, Int64GetDatum must be better". Per buildfarm failures now that Peter has added some regression tests here.
2016-11-02  pageinspect: Make btree test more portable  (Peter Eisentraut)
Choose test data that makes the output independent of endianness and alignment.
2016-11-01  postgres_fdw: Fix typo in comment.  (Robert Haas)
Etsuro Fujita
2016-11-01  pageinspect: Add tests  (Peter Eisentraut)
2016-10-28  pgstattuple: Don't take heavyweight locks when examining a hash index.  (Robert Haas)
It's currently necessary to take a heavyweight lock when scanning a hash bucket, but pgstattuple only examines individual pages, so it doesn't need to do this. If, for some hypothetical reason, it did need to do any heavyweight locking here, this logic would probably still be incorrect, because most of the locks that it is taking are meaningless. Only a heavyweight lock on a primary bucket page has any meaning, but this takes heavyweight locks on all pages regardless of function - and in particular overflow pages, where you might imagine that we'd want to lock the primary bucket page if we needed to lock anything at all.

This is arguably a bug that has existed since this code was added in commit dab42382f483c3070bdce14a4d93c5d0cf61e82b, but I'm not going to bother back-patching it because in most cases the only consequence is that running pgstattuple() on a hash index is a little slower than it otherwise might be, which is no big deal.

Extracted from a vastly larger patch by Amit Kapila which removes heavyweight locking for hash indexes entirely; analysis of why this can be done independently of the rest by me.
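A hedged sketch of what page-at-a-time examination looks like without the heavyweight locks: for a purely physical per-page pass, a shared buffer content lock is all that's needed. The function name is illustrative, not the pgstattuple code itself:

    #include "postgres.h"
    #include "miscadmin.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    /* Hedged sketch: inspect one page holding only a shared buffer lock. */
    static Size
    examine_one_page(Relation rel, BlockNumber blkno,
                     BufferAccessStrategy bstrategy)
    {
        Buffer  buf;
        Size    freespace;

        CHECK_FOR_INTERRUPTS();

        buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_NORMAL, bstrategy);
        LockBuffer(buf, BUFFER_LOCK_SHARE);

        /* a real implementation would also tally live/dead tuples here */
        freespace = PageGetFreeSpace(BufferGetPage(buf));

        UnlockReleaseBuffer(buf);
        return freespace;
    }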
2016-10-27  Merge commit 'b5bce6c1ec6061c8a4f730d927e162db7e2ce365'  (Pavan Deolasee)
2016-10-26  Suppress unused-variable warning in non-assert builds.  (Tom Lane)
Introduced in commit 7012b132d. Kyotaro Horiguchi
2016-10-26  Fix typo in comment.  (Heikki Linnakangas)
Daniel Gustafsson
2016-10-25  postgres_fdw: Try again to stabilize aggregate pushdown regression tests.  (Robert Haas)
A query that only aggregates one row isn't a great argument for pushdown, and buildfarm member brolga decides against it. Adjust the query a bit in the hopes of getting remote aggregation to win consistently. Jeevan Chalke, per suggestion from Tom Lane
2016-10-21  postgres_fdw: Attempt to stabilize regression results.  (Robert Haas)
Set enable_hashagg to false for tests involving least_agg(), so that we get the same plan regardless of local costing variances. Also, remove a test involving sqrt(); it's there to test deparsing of HAVING clauses containing expressions, but that's tested elsewhere anyway, and sqrt(2) deparses with different amounts of precision on different machines. Per buildfarm.
2016-10-21  postgres_fdw: Push down aggregates to remote servers.  (Robert Haas)
Now that the upper planner uses paths, and now that we have proper hooks to inject paths into the upper planning process, it's possible for foreign data wrappers to arrange to push aggregates to the remote side instead of fetching all of the rows and aggregating them locally. This figures to be a massive win for performance, so teach postgres_fdw to do it.

Jeevan Chalke and Ashutosh Bapat. Reviewed by Ashutosh Bapat with additional testing by Prabhat Sahu. Various mostly cosmetic changes by me.
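A very rough sketch of the hook shape this relies on, assuming the 9.6-era GetForeignUpperPaths signature (later releases add an extra argument); the function name is illustrative and the real postgres_fdw logic is far more involved:

    #include "postgres.h"
    #include "foreign/fdwapi.h"

    /*
     * Hedged sketch: an FDW can offer to perform grouping/aggregation
     * remotely by adding a path for the upper relation.  Shippability
     * checks and path construction are omitted here.
     */
    void
    my_fdw_GetForeignUpperPaths(PlannerInfo *root, UpperRelationKind stage,
                                RelOptInfo *input_rel, RelOptInfo *output_rel)
    {
        if (stage != UPPERREL_GROUP_AGG)
            return;             /* only interested in aggregation pushdown */

        /*
         * Here: verify every aggregate, GROUP BY and HAVING expression is
         * safe to send to the remote server, estimate costs, and add a
         * ForeignPath to output_rel with add_path().
         */
    }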
2016-10-18Revert "Replace PostmasterRandom() with a stronger way of generating ↵Heikki Linnakangas
randomness." This reverts commit 9e083fd4683294f41544e6d0d72f6e258ff3a77c. That was a few bricks shy of a load: * Query cancel stopped working * Buildfarm member pademelon stopped working, because the box doesn't have /dev/urandom nor /dev/random. This clearly needs some more discussion, and a quite different patch, so revert for now.
2016-10-18  Redesign tablesample method API, and do extensive code review.  (Tom Lane)
The original implementation of TABLESAMPLE modeled the tablesample method API on index access methods, which wasn't a good choice because, without specialized DDL commands, there's no way to build an extension that can implement a TSM. (Raw inserts into system catalogs are not an acceptable thing to do, because we can't undo them during DROP EXTENSION, nor will pg_upgrade behave sanely.) Instead adopt an API more like procedural language handlers or foreign data wrappers, wherein the only SQL-level support object needed is a single handler function identified by having a special return type. This lets us get rid of the supporting catalog altogether, so that no custom DDL support is needed for the feature.

Adjust the API so that it can support non-constant tablesample arguments (the original coding assumed we could evaluate the argument expressions at ExecInitSampleScan time, which is undesirable even if it weren't outright unsafe), and discourage sampling methods from looking at invisible tuples. Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable within and across queries, as required by the SQL standard, and deal more honestly with methods that can't support that requirement.

Make a full code-review pass over the tablesample additions, and fix assorted bugs, omissions, infelicities, and cosmetic issues (such as failure to put the added code stanzas in a consistent ordering). Improve EXPLAIN's output of tablesample plans, too.

Back-patch to 9.5 so that we don't have to support the original API in production.
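A hedged sketch of the handler style described above: the only SQL-level object is a function returning tsm_handler, whose C implementation hands back a TsmRoutine holding the method's callbacks. The my_-prefixed names are illustrative, the callback implementations (which would live elsewhere in the module) are omitted, and the exact TsmRoutine field list is assumed from the access/tsmapi.h of this era:

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/tsmapi.h"
    #include "catalog/pg_type.h"
    #include "nodes/pg_list.h"

    /* Callback implementations would be defined elsewhere in the module. */
    extern void my_samplescangetsamplesize(PlannerInfo *root, RelOptInfo *baserel,
                                           List *paramexprs, BlockNumber *pages,
                                           double *tuples);
    extern void my_beginsamplescan(SampleScanState *node, Datum *params,
                                   int nparams, uint32 seed);
    extern OffsetNumber my_nextsampletuple(SampleScanState *node,
                                           BlockNumber blockno,
                                           OffsetNumber maxoffset);

    PG_FUNCTION_INFO_V1(my_sample_handler);

    Datum
    my_sample_handler(PG_FUNCTION_ARGS)
    {
        TsmRoutine *tsm = makeNode(TsmRoutine);

        /* one float4 argument, evaluated at executor startup, not plan time */
        tsm->parameterTypes = list_make1_oid(FLOAT4OID);

        /* be honest about whether REPEATABLE can really be honored */
        tsm->repeatable_across_queries = true;
        tsm->repeatable_across_scans = true;

        tsm->SampleScanGetSampleSize = my_samplescangetsamplesize;
        tsm->InitSampleScan = NULL;             /* optional callback */
        tsm->BeginSampleScan = my_beginsamplescan;
        tsm->NextSampleBlock = NULL;            /* optional callback */
        tsm->NextSampleTuple = my_nextsampletuple;
        tsm->EndSampleScan = NULL;              /* optional callback */

        PG_RETURN_POINTER(tsm);
    }

On the SQL side, all the DDL such a method then needs is a single CREATE FUNCTION declaration returning tsm_handler for this handler.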
2016-10-18  Use version string from the server for pgxc_ctl  (Pavan Deolasee)
2016-10-18  Correctly initialise coordMaxWALSenders and datanodeMaxWALSenders while adding new nodes via pgxc_ctl  (Pavan Deolasee)
2016-10-18  Make another attempt to fix vpath build for pgxc_ctl contrib module  (Pavan Deolasee)
2016-10-18  Fix VPATH build for contrib/pgxc_ctl  (Pavan Deolasee)
2016-10-18  Add pgxc_monitor to Makefile  (Tomas Vondra)
The pgxc_monitor contrib module was not included in the Makefile, and was thus vulnerable to undetected compile breakage. That's not desirable, so add it to the Makefile.
2016-10-18  Remove pghba contrib module  (Tomas Vondra)
The purpose of the module is unknown, and it has not even compiled for quite some time, so apparently no one uses it. Instead of fixing it, let's remove it - if someone realizes it's useful, we can get it back from history.
2016-10-18  Add test case for Issue #70  (Pallavi Sontakke)
Reproduces the role-concurrently-dropped error.
2016-10-18  Add test case for Issue #43  (Pallavi Sontakke)
Parallel analyze causes an "Unexpected response from the Datanodes" error.
2016-10-18  Add test case for Issue #81  (Pallavi Sontakke)
Create empty cluster and add nodes multiple times to reproduce the issue. This issue occurs intermittently.
2016-10-18  Correct expected behavior of test.  (Pallavi Sontakke)
Test reproduces Issue #84 on crash recovery and prepared transactions.
2016-10-18  Add TAP test for crash recovery Issue #84  (Pallavi Sontakke)
Test crash recovery when prepared transactions are being created in the background. Tests #84.
2016-10-18  Remove an unintentional "set -x" command that slipped into the previous commit  (Pavan Deolasee)
2016-10-18Ensure "init all" (and other init commands too) does not remove existing dataPavan Deolasee
directories unless "force" option is used We'd tried to fix this earlier, but looks like double quote is not getting passed to the shell correctly. Instead use a single quote. Report by Pallavi Sontakke during QA testing.
2016-10-18  Test no longer uses 'start' command for gtm slave  (Pallavi Sontakke)
The 'pgxc_ctl start' command is no longer needed to start the gtm slave, with recent code changes.
2016-10-18  Don't use special marker "none" while updating max_wal_senders in postgresql.conf via pgxc_ctl  (Pavan Deolasee)
Instead use "0" if the variable is not set or is set to "none".
2016-10-18  Make 'help add' more explanatory  (Pallavi Sontakke)
Help the user to supply 'slave_name' in 'pgxc_ctl add gtm slave', which differs from other commands where the original node name is expected. Fixes #85
2016-10-18  Avoid removing directories for some pgxc_ctl calls, just as an added protection if the user makes a mistake  (Pavan Deolasee)
2016-10-18  Check 'status' rather than the return value of the waitpid() function  (Pavan Deolasee)
2016-10-18  Suppress the message hinting to start coordinator/datanode/gtm server at the end of initdb/initgtm when the commands are run via pgxc_ctl  (Pavan Deolasee)
This can be confusing to the user. We use an environment variable PGXC_CTL_SILENT to silence the message instead of adding a new option.
2016-10-18  Fix a typo in the log message during datanode failover  (Pavan Deolasee)
2016-10-18  Do not add a spurious ';' when not cleaning WAL directory for a datanode  (Pavan Deolasee)
2016-10-18  Test: Change command to start GTM standby.  (Pallavi Sontakke)
Use temporary PGXC_CTL_HOME for test.
2016-10-18  Add test for GTM standby  (Pallavi Sontakke)
2016-10-18  Modify tests  (Pallavi Sontakke)
Remove cluster-cleanup at start. Extract PGXC_CTL_HOME from ENV.