From 99255d73c07c89b69be028a1a7b8027a78befed4 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Wed, 28 Jun 2017 12:30:16 -0400
Subject: Second try at fixing tcp_keepalives_idle option on Solaris.

Buildfarm evidence shows that TCP_KEEPALIVE_THRESHOLD doesn't exist after
all on Solaris < 11.  This means we need to take positive action to prevent
the TCP_KEEPALIVE code path from being taken on that platform.  I've chosen
to limit it with "&& defined(__darwin__)", since it's unclear that anyone
else would follow Apple's precedent of spelling the symbol that way.

Also, follow a suggestion from Michael Paquier of eliminating code
duplication by defining a couple of intermediate symbols for the socket
option.

In passing, make some effort to reduce the number of translatable messages
by replacing "setsockopt(foo) failed" with "setsockopt(%s) failed", etc,
throughout the affected files.  And update relevant documentation so that
it doesn't claim to provide an exhaustive list of the possible socket
option names.

Like the previous commit (f0256c774), back-patch to all supported branches.

Discussion: https://postgr.es/m/20170627163757.25161.528@wrigleys.postgresql.org
---
 doc/src/sgml/config.sgml | 19 ++++++++++---------
 doc/src/sgml/libpq.sgml  | 19 ++++++++++---------
 2 files changed, 20 insertions(+), 18 deletions(-)

(limited to 'doc/src')

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3aca6479b1..2485e6190d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -856,8 +856,8 @@ include_dir 'conf.d'
         Specifies the number of seconds of inactivity after which TCP
         should send a keepalive message to the client.  A value of 0 uses
         the system default.
-        This parameter is supported only on systems that support the
-        TCP_KEEPIDLE or TCP_KEEPALIVE symbols, and on
+        This parameter is supported only on systems that support
+        TCP_KEEPIDLE or an equivalent socket option, and on
         Windows; on other systems, it must be zero.
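The "intermediate symbols" approach from the commit message can be sketched in isolation. This is a simplified illustration, not the actual PostgreSQL source: PG_TCP_KEEPALIVE_IDLE and its _STR companion are names invented here, and note that __darwin__ is defined by PostgreSQL's port headers on macOS, not by the compiler itself.

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/*
 * Pick whichever "idle seconds before the first keepalive probe" option
 * this platform provides.  Linux spells it TCP_KEEPIDLE; Apple spells it
 * TCP_KEEPALIVE, hence the "&& defined(__darwin__)" guard the commit adds.
 */
#if defined(TCP_KEEPIDLE)
#define PG_TCP_KEEPALIVE_IDLE     TCP_KEEPIDLE
#define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPIDLE"
#elif defined(TCP_KEEPALIVE) && defined(__darwin__)
#define PG_TCP_KEEPALIVE_IDLE     TCP_KEEPALIVE
#define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPALIVE"
#endif

static int
set_keepalive_idle(int sock, int seconds)
{
#ifdef PG_TCP_KEEPALIVE_IDLE
    if (setsockopt(sock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
                   &seconds, sizeof(seconds)) < 0)
    {
        /* one translatable "setsockopt(%s)" string covers every spelling */
        fprintf(stderr, "setsockopt(%s) failed\n", PG_TCP_KEEPALIVE_IDLE_STR);
        return -1;
    }
    return 0;
#else
    /* option unavailable: only the "system default" value 0 is accepted */
    return (seconds == 0) ? 0 : -1;
#endif
}
```

The point of the intermediate macro is that the call site and the error message are written once, regardless of which platform symbol is in play.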
In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -882,9 +882,9 @@ include_dir 'conf.d' Specifies the number of seconds after which a TCP keepalive message that is not acknowledged by the client should be retransmitted. A value of 0 uses the system default. - This parameter is supported only on systems that support the - TCP_KEEPINTVL symbol, and on Windows; on other systems, it - must be zero. + This parameter is supported only on systems that support + TCP_KEEPINTVL or an equivalent socket option, and on + Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -906,10 +906,11 @@ include_dir 'conf.d' Specifies the number of TCP keepalives that can be lost before - the server's connection to the client is considered dead. A value of 0 - uses the system default. This parameter is - supported only on systems that support the TCP_KEEPCNT - symbol; on other systems, it must be zero. + the server's connection to the client is considered dead. + A value of 0 uses the system default. + This parameter is supported only on systems that support + TCP_KEEPCNT or an equivalent socket option; + on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index c3bd4f9b9b..1b3d55b8b9 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -1125,10 +1125,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Controls the number of seconds of inactivity after which TCP should send a keepalive message to the server. A value of zero uses the system default. This parameter is ignored for connections made via a - Unix-domain socket, or if keepalives are disabled. 
It is only supported - on systems where the TCP_KEEPIDLE or TCP_KEEPALIVE - socket option is available, and on Windows; on other systems, it has no - effect. + Unix-domain socket, or if keepalives are disabled. + It is only supported on systems where TCP_KEEPIDLE or + an equivalent socket option is available, and on Windows; on other + systems, it has no effect. @@ -1141,9 +1141,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname that is not acknowledged by the server should be retransmitted. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where the TCP_KEEPINTVL - socket option is available, and on Windows; on other systems, it has no - effect. + It is only supported on systems where TCP_KEEPINTVL or + an equivalent socket option is available, and on Windows; on other + systems, it has no effect. @@ -1156,8 +1156,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname client's connection to the server is considered dead. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where the TCP_KEEPCNT - socket option is available; on other systems, it has no effect. + It is only supported on systems where TCP_KEEPCNT or + an equivalent socket option is available; on other systems, it has no + effect. -- cgit v1.2.3 From 501ed02cf6f4f60c3357775eb07578aebc912d3a Mon Sep 17 00:00:00 2001 From: Andrew Gierth Date: Wed, 28 Jun 2017 18:55:03 +0100 Subject: Fix transition tables for partition/inheritance. We disallow row-level triggers with transition tables on child tables. Transition tables for triggers on the parent table contain only those columns present in the parent. (We can't mix tuple formats in a single transition table.) 
Patch by Thomas Munro Discussion: https://postgr.es/m/CA%2BTgmoZzTBBAsEUh4MazAN7ga%3D8SsMC-Knp-6cetts9yNZUCcg%40mail.gmail.com --- doc/src/sgml/ref/create_trigger.sgml | 14 ++ src/backend/catalog/pg_inherits.c | 24 +++ src/backend/commands/copy.c | 70 ++++++- src/backend/commands/tablecmds.c | 28 +++ src/backend/commands/trigger.c | 218 ++++++++++++++++---- src/backend/executor/execMain.c | 2 +- src/backend/executor/execReplication.c | 6 +- src/backend/executor/nodeModifyTable.c | 150 ++++++++++++-- src/include/catalog/pg_inherits_fn.h | 1 + src/include/commands/trigger.h | 45 +++- src/include/nodes/execnodes.h | 4 + src/test/regress/expected/triggers.out | 326 ++++++++++++++++++++++++++--- src/test/regress/sql/triggers.sql | 365 ++++++++++++++++++++++++++++++--- 13 files changed, 1143 insertions(+), 110 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index c5f7c75838..18efe6a9ed 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -458,6 +458,20 @@ UPDATE OF column_name1 [, column_name2 + + Modifying a partitioned table or a table with inheritance children fires + statement-level triggers directly attached to that table, but not + statement-level triggers for its partitions or child tables. In contrast, + row-level triggers are fired for all affected partitions or child tables. + If a statement-level trigger has been defined with transition relations + named by a REFERENCING clause, then before and after + images of rows are visible from all affected partitions or child tables. + In the case of inheritance children, the row images include only columns + that are present in the table that the trigger is attached to. Currently, + row-level triggers with transition relations cannot be defined on + partitions or inheritance child tables. 
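The behavior documented in the note above can be illustrated with a short SQL sketch (table, trigger, and function names are hypothetical, invented for this example; the error text is the one added by this patch):

```sql
-- A statement-level AFTER trigger with a transition table on a
-- partitioned table sees rows from all affected partitions.
CREATE TABLE measurement (city_id int, logdate date) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2017 PARTITION OF measurement
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

CREATE FUNCTION count_new_rows() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'inserted % row(s)', (SELECT count(*) FROM new_rows);
    RETURN NULL;
END;
$$;

CREATE TRIGGER measurement_ins AFTER INSERT ON measurement
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT EXECUTE PROCEDURE count_new_rows();

-- Rows routed to measurement_y2017 are still visible in new_rows:
INSERT INTO measurement VALUES (1, '2017-06-28'), (2, '2017-06-29');

-- But a row-level trigger with a transition table on the partition
-- itself is rejected:
CREATE TRIGGER bad AFTER INSERT ON measurement_y2017
    REFERENCING NEW TABLE AS new_rows
    FOR EACH ROW EXECUTE PROCEDURE count_new_rows();
-- ERROR:  ROW triggers with transition tables are not supported on partitions
```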
+ + In PostgreSQL versions before 7.3, it was necessary to declare trigger functions as returning the placeholder diff --git a/src/backend/catalog/pg_inherits.c b/src/backend/catalog/pg_inherits.c index e5fb52cfbf..245a374fc9 100644 --- a/src/backend/catalog/pg_inherits.c +++ b/src/backend/catalog/pg_inherits.c @@ -273,6 +273,30 @@ has_subclass(Oid relationId) return result; } +/* + * has_superclass - does this relation inherit from another? The caller + * should hold a lock on the given relation so that it can't be concurrently + * added to or removed from an inheritance hierarchy. + */ +bool +has_superclass(Oid relationId) +{ + Relation catalog; + SysScanDesc scan; + ScanKeyData skey; + bool result; + + catalog = heap_open(InheritsRelationId, AccessShareLock); + ScanKeyInit(&skey, Anum_pg_inherits_inhrelid, BTEqualStrategyNumber, + F_OIDEQ, ObjectIdGetDatum(relationId)); + scan = systable_beginscan(catalog, InheritsRelidSeqnoIndexId, true, + NULL, 1, &skey); + result = HeapTupleIsValid(systable_getnext(scan)); + systable_endscan(scan); + heap_close(catalog, AccessShareLock); + + return result; +} /* * Given two type OIDs, determine whether the first is a complex type diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 3c399e29db..a4c02e6b7c 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -171,6 +171,8 @@ typedef struct CopyStateData ResultRelInfo *partitions; /* Per partition result relation */ TupleConversionMap **partition_tupconv_maps; TupleTableSlot *partition_tuple_slot; + TransitionCaptureState *transition_capture; + TupleConversionMap **transition_tupconv_maps; /* * These variables are used to reduce overhead in textual COPY FROM. 
@@ -1436,6 +1438,36 @@ BeginCopy(ParseState *pstate, cstate->num_partitions = num_partitions; cstate->partition_tupconv_maps = partition_tupconv_maps; cstate->partition_tuple_slot = partition_tuple_slot; + + /* + * If there are any triggers with transition tables on the named + * relation, we need to be prepared to capture transition tuples + * from child relations too. + */ + cstate->transition_capture = + MakeTransitionCaptureState(rel->trigdesc); + + /* + * If we are capturing transition tuples, they may need to be + * converted from partition format back to partitioned table + * format (this is only ever necessary if a BEFORE trigger + * modifies the tuple). + */ + if (cstate->transition_capture != NULL) + { + int i; + + cstate->transition_tupconv_maps = (TupleConversionMap **) + palloc0(sizeof(TupleConversionMap *) * + cstate->num_partitions); + for (i = 0; i < cstate->num_partitions; ++i) + { + cstate->transition_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(cstate->partitions[i].ri_RelationDesc), + RelationGetDescr(rel), + gettext_noop("could not convert row type")); + } + } } } else @@ -2591,6 +2623,35 @@ CopyFrom(CopyState cstate) */ estate->es_result_relation_info = resultRelInfo; + /* + * If we're capturing transition tuples, we might need to convert + * from the partition rowtype to parent rowtype. + */ + if (cstate->transition_capture != NULL) + { + if (resultRelInfo->ri_TrigDesc && + (resultRelInfo->ri_TrigDesc->trig_insert_before_row || + resultRelInfo->ri_TrigDesc->trig_insert_instead_row)) + { + /* + * If there are any BEFORE or INSTEAD triggers on the + * partition, we'll have to be ready to convert their + * result back to tuplestore format. + */ + cstate->transition_capture->tcs_original_insert_tuple = NULL; + cstate->transition_capture->tcs_map = + cstate->transition_tupconv_maps[leaf_part_index]; + } + else + { + /* + * Otherwise, just remember the original unconverted + * tuple, to avoid a needless round trip conversion. 
+ */ + cstate->transition_capture->tcs_original_insert_tuple = tuple; + cstate->transition_capture->tcs_map = NULL; + } + } /* * We might need to convert from the parent rowtype to the * partition rowtype. @@ -2703,7 +2764,7 @@ CopyFrom(CopyState cstate) /* AFTER ROW INSERT Triggers */ ExecARInsertTriggers(estate, resultRelInfo, tuple, - recheckIndexes); + recheckIndexes, cstate->transition_capture); list_free(recheckIndexes); } @@ -2856,7 +2917,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid, estate, false, NULL, NIL); ExecARInsertTriggers(estate, resultRelInfo, bufferedTuples[i], - recheckIndexes); + recheckIndexes, NULL); list_free(recheckIndexes); } } @@ -2866,14 +2927,15 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid, * anyway. */ else if (resultRelInfo->ri_TrigDesc != NULL && - resultRelInfo->ri_TrigDesc->trig_insert_after_row) + (resultRelInfo->ri_TrigDesc->trig_insert_after_row || + resultRelInfo->ri_TrigDesc->trig_insert_new_table)) { for (i = 0; i < nBufferedTuples; i++) { cstate->cur_lineno = firstBufferedLineNo + i; ExecARInsertTriggers(estate, resultRelInfo, bufferedTuples[i], - NIL); + NIL, NULL); } } diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 7d9c769b06..bb00858ad1 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -10933,6 +10933,7 @@ ATExecAddInherit(Relation child_rel, RangeVar *parent, LOCKMODE lockmode) Relation parent_rel; List *children; ObjectAddress address; + const char *trigger_name; /* * A self-exclusive lock is needed here. See the similar case in @@ -11014,6 +11015,19 @@ ATExecAddInherit(Relation child_rel, RangeVar *parent, LOCKMODE lockmode) RelationGetRelationName(child_rel), RelationGetRelationName(parent_rel)))); + /* + * If child_rel has row-level triggers with transition tables, we + * currently don't allow it to become an inheritance child. 
See also + * prohibitions in ATExecAttachPartition() and CreateTrigger(). + */ + trigger_name = FindTriggerIncompatibleWithInheritance(child_rel->trigdesc); + if (trigger_name != NULL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("trigger \"%s\" prevents table \"%s\" from becoming an inheritance child", + trigger_name, RelationGetRelationName(child_rel)), + errdetail("ROW triggers with transition tables are not supported in inheritance hierarchies"))); + /* OK to create inheritance */ CreateInheritance(child_rel, parent_rel); @@ -13418,6 +13432,7 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) TupleDesc tupleDesc; bool skip_validate = false; ObjectAddress address; + const char *trigger_name; attachRel = heap_openrv(cmd->name, AccessExclusiveLock); @@ -13547,6 +13562,19 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) errdetail("New partition should contain only the columns present in parent."))); } + /* + * If child_rel has row-level triggers with transition tables, we + * currently don't allow it to become a partition. See also prohibitions + * in ATExecAddInherit() and CreateTrigger(). + */ + trigger_name = FindTriggerIncompatibleWithInheritance(attachRel->trigdesc); + if (trigger_name != NULL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("trigger \"%s\" prevents table \"%s\" from becoming a partition", + trigger_name, RelationGetRelationName(attachRel)), + errdetail("ROW triggers with transition tables are not supported on partitions"))); + /* OK to create inheritance. 
Rest of the checks performed there */ CreateInheritance(attachRel, rel); diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 45d1f515eb..f902e0cdf5 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -24,6 +24,7 @@ #include "catalog/objectaccess.h" #include "catalog/pg_constraint.h" #include "catalog/pg_constraint_fn.h" +#include "catalog/pg_inherits_fn.h" #include "catalog/pg_proc.h" #include "catalog/pg_trigger.h" #include "catalog/pg_type.h" @@ -96,7 +97,8 @@ static HeapTuple ExecCallTriggerFunc(TriggerData *trigdata, static void AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, int event, bool row_trigger, HeapTuple oldtup, HeapTuple newtup, - List *recheckIndexes, Bitmapset *modifiedCols); + List *recheckIndexes, Bitmapset *modifiedCols, + TransitionCaptureState *transition_capture); static void AfterTriggerEnlargeQueryState(void); @@ -354,13 +356,6 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, * adjustments will be needed below. */ - if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) - ereport(ERROR, - (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("\"%s\" is a partitioned table", - RelationGetRelationName(rel)), - errdetail("Triggers on partitioned tables cannot have transition tables."))); - if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -375,6 +370,27 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, RelationGetRelationName(rel)), errdetail("Triggers on views cannot have transition tables."))); + /* + * We currently don't allow row-level triggers with transition + * tables on partition or inheritance children. Such triggers + * would somehow need to see tuples converted to the format of the + * table they're attached to, and it's not clear which subset of + * tuples each child should see. See also the prohibitions in + * ATExecAttachPartition() and ATExecAddInherit(). 
+ */ + if (TRIGGER_FOR_ROW(tgtype) && has_superclass(rel->rd_id)) + { + /* Use appropriate error message. */ + if (rel->rd_rel->relispartition) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("ROW triggers with transition tables are not supported on partitions"))); + else + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("ROW triggers with transition tables are not supported on inheritance children"))); + } + if (stmt->timing != TRIGGER_TYPE_AFTER) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), @@ -2028,6 +2044,64 @@ equalTriggerDescs(TriggerDesc *trigdesc1, TriggerDesc *trigdesc2) } #endif /* NOT_USED */ +/* + * Check if there is a row-level trigger with transition tables that prevents + * a table from becoming an inheritance child or partition. Return the name + * of the first such incompatible trigger, or NULL if there is none. + */ +const char * +FindTriggerIncompatibleWithInheritance(TriggerDesc *trigdesc) +{ + if (trigdesc != NULL) + { + int i; + + for (i = 0; i < trigdesc->numtriggers; ++i) + { + Trigger *trigger = &trigdesc->triggers[i]; + + if (trigger->tgoldtable != NULL || trigger->tgnewtable != NULL) + return trigger->tgname; + } + } + + return NULL; +} + +/* + * Make a TransitionCaptureState object from a given TriggerDesc. The + * resulting object holds the flags which control whether transition tuples + * are collected when tables are modified. This allows us to use the flags + * from a parent table to control the collection of transition tuples from + * child tables. + * + * If there are no triggers with transition tables configured for 'trigdesc', + * then return NULL. + * + * The resulting object can be passed to the ExecAR* functions. The caller + * should set tcs_map or tcs_original_insert_tuple as appropriate when dealing + * with child tables. 
+ */ +TransitionCaptureState * +MakeTransitionCaptureState(TriggerDesc *trigdesc) +{ + TransitionCaptureState *state = NULL; + + if (trigdesc != NULL && + (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table || + trigdesc->trig_update_new_table || trigdesc->trig_insert_new_table)) + { + state = (TransitionCaptureState *) + palloc0(sizeof(TransitionCaptureState)); + state->tcs_delete_old_table = trigdesc->trig_delete_old_table; + state->tcs_update_old_table = trigdesc->trig_update_old_table; + state->tcs_update_new_table = trigdesc->trig_update_new_table; + state->tcs_insert_new_table = trigdesc->trig_insert_new_table; + } + + return state; +} + /* * Call a trigger function. * @@ -2192,7 +2266,7 @@ ExecASInsertTriggers(EState *estate, ResultRelInfo *relinfo) if (trigdesc && trigdesc->trig_insert_after_statement) AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_INSERT, - false, NULL, NULL, NIL, NULL); + false, NULL, NULL, NIL, NULL, NULL); } TupleTableSlot * @@ -2263,14 +2337,18 @@ ExecBRInsertTriggers(EState *estate, ResultRelInfo *relinfo, void ExecARInsertTriggers(EState *estate, ResultRelInfo *relinfo, - HeapTuple trigtuple, List *recheckIndexes) + HeapTuple trigtuple, List *recheckIndexes, + TransitionCaptureState *transition_capture) { TriggerDesc *trigdesc = relinfo->ri_TrigDesc; - if (trigdesc && - (trigdesc->trig_insert_after_row || trigdesc->trig_insert_new_table)) + if ((trigdesc && trigdesc->trig_insert_after_row) || + (trigdesc && !transition_capture && trigdesc->trig_insert_new_table) || + (transition_capture && transition_capture->tcs_insert_new_table)) AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_INSERT, - true, NULL, trigtuple, recheckIndexes, NULL); + true, NULL, trigtuple, + recheckIndexes, NULL, + transition_capture); } TupleTableSlot * @@ -2398,7 +2476,7 @@ ExecASDeleteTriggers(EState *estate, ResultRelInfo *relinfo) if (trigdesc && trigdesc->trig_delete_after_statement) AfterTriggerSaveEvent(estate, relinfo, 
TRIGGER_EVENT_DELETE, - false, NULL, NULL, NIL, NULL); + false, NULL, NULL, NIL, NULL, NULL); } bool @@ -2473,12 +2551,14 @@ ExecBRDeleteTriggers(EState *estate, EPQState *epqstate, void ExecARDeleteTriggers(EState *estate, ResultRelInfo *relinfo, ItemPointer tupleid, - HeapTuple fdw_trigtuple) + HeapTuple fdw_trigtuple, + TransitionCaptureState *transition_capture) { TriggerDesc *trigdesc = relinfo->ri_TrigDesc; - if (trigdesc && - (trigdesc->trig_delete_after_row || trigdesc->trig_delete_old_table)) + if ((trigdesc && trigdesc->trig_delete_after_row) || + (trigdesc && !transition_capture && trigdesc->trig_delete_old_table) || + (transition_capture && transition_capture->tcs_delete_old_table)) { HeapTuple trigtuple; @@ -2494,7 +2574,8 @@ ExecARDeleteTriggers(EState *estate, ResultRelInfo *relinfo, trigtuple = fdw_trigtuple; AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_DELETE, - true, trigtuple, NULL, NIL, NULL); + true, trigtuple, NULL, NIL, NULL, + transition_capture); if (trigtuple != fdw_trigtuple) heap_freetuple(trigtuple); } @@ -2610,7 +2691,8 @@ ExecASUpdateTriggers(EState *estate, ResultRelInfo *relinfo) if (trigdesc && trigdesc->trig_update_after_statement) AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_UPDATE, false, NULL, NULL, NIL, - GetUpdatedColumns(relinfo, estate)); + GetUpdatedColumns(relinfo, estate), + NULL); } TupleTableSlot * @@ -2735,12 +2817,18 @@ ExecARUpdateTriggers(EState *estate, ResultRelInfo *relinfo, ItemPointer tupleid, HeapTuple fdw_trigtuple, HeapTuple newtuple, - List *recheckIndexes) + List *recheckIndexes, + TransitionCaptureState *transition_capture) { TriggerDesc *trigdesc = relinfo->ri_TrigDesc; - if (trigdesc && (trigdesc->trig_update_after_row || - trigdesc->trig_update_old_table || trigdesc->trig_update_new_table)) + if ((trigdesc && trigdesc->trig_update_after_row) || + (trigdesc && !transition_capture && + (trigdesc->trig_update_old_table || + trigdesc->trig_update_new_table)) || + (transition_capture && + 
(transition_capture->tcs_update_old_table || + transition_capture->tcs_update_new_table))) { HeapTuple trigtuple; @@ -2757,7 +2845,8 @@ ExecARUpdateTriggers(EState *estate, ResultRelInfo *relinfo, AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_UPDATE, true, trigtuple, newtuple, recheckIndexes, - GetUpdatedColumns(relinfo, estate)); + GetUpdatedColumns(relinfo, estate), + transition_capture); if (trigtuple != fdw_trigtuple) heap_freetuple(trigtuple); } @@ -2888,7 +2977,7 @@ ExecASTruncateTriggers(EState *estate, ResultRelInfo *relinfo) if (trigdesc && trigdesc->trig_truncate_after_statement) AfterTriggerSaveEvent(estate, relinfo, TRIGGER_EVENT_TRUNCATE, - false, NULL, NULL, NIL, NULL); + false, NULL, NULL, NIL, NULL, NULL); } @@ -5090,7 +5179,8 @@ static void AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, int event, bool row_trigger, HeapTuple oldtup, HeapTuple newtup, - List *recheckIndexes, Bitmapset *modifiedCols) + List *recheckIndexes, Bitmapset *modifiedCols, + TransitionCaptureState *transition_capture) { Relation rel = relinfo->ri_RelationDesc; TriggerDesc *trigdesc = relinfo->ri_TrigDesc; @@ -5120,10 +5210,49 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, */ if (row_trigger) { - if ((event == TRIGGER_EVENT_DELETE && - trigdesc->trig_delete_old_table) || - (event == TRIGGER_EVENT_UPDATE && - trigdesc->trig_update_old_table)) + HeapTuple original_insert_tuple = NULL; + TupleConversionMap *map = NULL; + bool delete_old_table = false; + bool update_old_table = false; + bool update_new_table = false; + bool insert_new_table = false; + + if (transition_capture != NULL) + { + /* + * A TransitionCaptureState object was provided to tell us which + * tuples to capture based on a parent table named in a DML + * statement. We may be dealing with a child table with an + * incompatible TupleDescriptor, in which case we'll need a map to + * convert them. 
As a small optimization, we may receive the + original tuple from an insertion into a partitioned table to + avoid a wasteful parent->child->parent round trip. + */ + delete_old_table = transition_capture->tcs_delete_old_table; + update_old_table = transition_capture->tcs_update_old_table; + update_new_table = transition_capture->tcs_update_new_table; + insert_new_table = transition_capture->tcs_insert_new_table; + map = transition_capture->tcs_map; + original_insert_tuple = + transition_capture->tcs_original_insert_tuple; + } + else if (trigdesc != NULL) + { + /* + * Check if we need to capture transition tuples for triggers + * defined on this relation directly. This case is useful for + * cases like execReplication.c which don't set up a + * TransitionCaptureState because they don't know how to work with + * partitions. + */ + delete_old_table = trigdesc->trig_delete_old_table; + update_old_table = trigdesc->trig_update_old_table; + update_new_table = trigdesc->trig_update_new_table; + insert_new_table = trigdesc->trig_insert_new_table; + } + + if ((event == TRIGGER_EVENT_DELETE && delete_old_table) || + (event == TRIGGER_EVENT_UPDATE && update_old_table)) { Tuplestorestate *old_tuplestore; @@ -5131,12 +5260,18 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, old_tuplestore = GetTriggerTransitionTuplestore (afterTriggers.old_tuplestores); - tuplestore_puttuple(old_tuplestore, oldtup); + if (map != NULL) + { + HeapTuple converted = do_convert_tuple(oldtup, map); + + tuplestore_puttuple(old_tuplestore, converted); + pfree(converted); + } + else + tuplestore_puttuple(old_tuplestore, oldtup); } - if ((event == TRIGGER_EVENT_INSERT && - trigdesc->trig_insert_new_table) || - (event == TRIGGER_EVENT_UPDATE && - trigdesc->trig_update_new_table)) + if ((event == TRIGGER_EVENT_INSERT && insert_new_table) || + (event == TRIGGER_EVENT_UPDATE && update_new_table)) { Tuplestorestate *new_tuplestore; @@ -5144,11 +5279,22 @@ AfterTriggerSaveEvent(EState
*estate, ResultRelInfo *relinfo, new_tuplestore = GetTriggerTransitionTuplestore (afterTriggers.new_tuplestores); - tuplestore_puttuple(new_tuplestore, newtup); + if (original_insert_tuple != NULL) + tuplestore_puttuple(new_tuplestore, original_insert_tuple); + else if (map != NULL) + { + HeapTuple converted = do_convert_tuple(newtup, map); + + tuplestore_puttuple(new_tuplestore, converted); + pfree(converted); + } + else + tuplestore_puttuple(new_tuplestore, newtup); } /* If transition tables are the only reason we're here, return. */ - if ((event == TRIGGER_EVENT_DELETE && !trigdesc->trig_delete_after_row) || + if (trigdesc == NULL || + (event == TRIGGER_EVENT_DELETE && !trigdesc->trig_delete_after_row) || (event == TRIGGER_EVENT_INSERT && !trigdesc->trig_insert_after_row) || (event == TRIGGER_EVENT_UPDATE && !trigdesc->trig_update_after_row)) return; diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 7f0d21f516..0f08283f81 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -3198,7 +3198,7 @@ EvalPlanQualEnd(EPQState *epqstate) * 'tup_conv_maps' receives an array of TupleConversionMap objects with one * entry for every leaf partition (required to convert input tuple based * on the root table's rowtype to a leaf partition's rowtype after tuple - * routing is done + * routing is done) * 'partition_tuple_slot' receives a standalone TupleTableSlot to be used * to manipulate any given leaf partition's rowtype after that partition * is chosen by tuple-routing. 
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index 59f14e997f..36960eaa7e 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -417,7 +417,7 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot) /* AFTER ROW INSERT Triggers */ ExecARInsertTriggers(estate, resultRelInfo, tuple, - recheckIndexes); + recheckIndexes, NULL); list_free(recheckIndexes); } @@ -479,7 +479,7 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate, /* AFTER ROW UPDATE Triggers */ ExecARUpdateTriggers(estate, resultRelInfo, &searchslot->tts_tuple->t_self, - NULL, tuple, recheckIndexes); + NULL, tuple, recheckIndexes, NULL); list_free(recheckIndexes); } @@ -522,7 +522,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate, /* AFTER ROW DELETE Triggers */ ExecARDeleteTriggers(estate, resultRelInfo, - &searchslot->tts_tuple->t_self, NULL); + &searchslot->tts_tuple->t_self, NULL, NULL); list_free(recheckIndexes); } diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 5e43a06942..f2534f2062 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -313,6 +313,36 @@ ExecInsert(ModifyTableState *mtstate, /* For ExecInsertIndexTuples() to work on the partition's indexes */ estate->es_result_relation_info = resultRelInfo; + /* + * If we're capturing transition tuples, we might need to convert from + * the partition rowtype to parent rowtype. + */ + if (mtstate->mt_transition_capture != NULL) + { + if (resultRelInfo->ri_TrigDesc && + (resultRelInfo->ri_TrigDesc->trig_insert_before_row || + resultRelInfo->ri_TrigDesc->trig_insert_instead_row)) + { + /* + * If there are any BEFORE or INSTEAD triggers on the + * partition, we'll have to be ready to convert their result + * back to tuplestore format. 
+ */ + mtstate->mt_transition_capture->tcs_original_insert_tuple = NULL; + mtstate->mt_transition_capture->tcs_map = + mtstate->mt_transition_tupconv_maps[leaf_part_index]; + } + else + { + /* + * Otherwise, just remember the original unconverted tuple, to + * avoid a needless round trip conversion. + */ + mtstate->mt_transition_capture->tcs_original_insert_tuple = tuple; + mtstate->mt_transition_capture->tcs_map = NULL; + } + } + /* * We might need to convert from the parent rowtype to the partition * rowtype. @@ -588,7 +618,8 @@ ExecInsert(ModifyTableState *mtstate, } /* AFTER ROW INSERT Triggers */ - ExecARInsertTriggers(estate, resultRelInfo, tuple, recheckIndexes); + ExecARInsertTriggers(estate, resultRelInfo, tuple, recheckIndexes, + mtstate->mt_transition_capture); list_free(recheckIndexes); @@ -636,7 +667,8 @@ ExecInsert(ModifyTableState *mtstate, * ---------------------------------------------------------------- */ static TupleTableSlot * -ExecDelete(ItemPointer tupleid, +ExecDelete(ModifyTableState *mtstate, + ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *planSlot, EPQState *epqstate, @@ -813,7 +845,8 @@ ldelete:; (estate->es_processed)++; /* AFTER ROW DELETE Triggers */ - ExecARDeleteTriggers(estate, resultRelInfo, tupleid, oldtuple); + ExecARDeleteTriggers(estate, resultRelInfo, tupleid, oldtuple, + mtstate->mt_transition_capture); /* Process RETURNING if present */ if (resultRelInfo->ri_projectReturning) @@ -894,7 +927,8 @@ ldelete:; * ---------------------------------------------------------------- */ static TupleTableSlot * -ExecUpdate(ItemPointer tupleid, +ExecUpdate(ModifyTableState *mtstate, + ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot, TupleTableSlot *planSlot, @@ -1122,7 +1156,8 @@ lreplace:; /* AFTER ROW UPDATE Triggers */ ExecARUpdateTriggers(estate, resultRelInfo, tupleid, oldtuple, tuple, - recheckIndexes); + recheckIndexes, + mtstate->mt_transition_capture); list_free(recheckIndexes); @@ -1329,7 +1364,7 @@ 
ExecOnConflictUpdate(ModifyTableState *mtstate, */ /* Execute UPDATE with projection */ - *returning = ExecUpdate(&tuple.t_self, NULL, + *returning = ExecUpdate(mtstate, &tuple.t_self, NULL, mtstate->mt_conflproj, planSlot, &mtstate->mt_epqstate, mtstate->ps.state, canSetTag); @@ -1376,20 +1411,31 @@ fireBSTriggers(ModifyTableState *node) } /* - * Process AFTER EACH STATEMENT triggers + * Return the ResultRelInfo for which we will fire AFTER STATEMENT triggers. + * This is also the relation into whose tuple format all captured transition + * tuples must be converted. */ -static void -fireASTriggers(ModifyTableState *node) +static ResultRelInfo * +getASTriggerResultRelInfo(ModifyTableState *node) { - ResultRelInfo *resultRelInfo = node->resultRelInfo; - /* * If the node modifies a partitioned table, we must fire its triggers. * Note that in that case, node->resultRelInfo points to the first leaf * partition, not the root table. */ if (node->rootResultRelInfo != NULL) - resultRelInfo = node->rootResultRelInfo; + return node->rootResultRelInfo; + else + return node->resultRelInfo; +} + +/* + * Process AFTER EACH STATEMENT triggers + */ +static void +fireASTriggers(ModifyTableState *node) +{ + ResultRelInfo *resultRelInfo = getASTriggerResultRelInfo(node); switch (node->operation) { @@ -1411,6 +1457,72 @@ fireASTriggers(ModifyTableState *node) } } +/* + * Set up the state needed for collecting transition tuples for AFTER + * triggers. + */ +static void +ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) +{ + ResultRelInfo *targetRelInfo = getASTriggerResultRelInfo(mtstate); + int i; + + /* Check for transition tables on the directly targeted relation. */ + mtstate->mt_transition_capture = + MakeTransitionCaptureState(targetRelInfo->ri_TrigDesc); + + /* + * If we found that we need to collect transition tuples then we may also + * need tuple conversion maps for any children that have TupleDescs that + * aren't compatible with the tuplestores. 
+ */ + if (mtstate->mt_transition_capture != NULL) + { + ResultRelInfo *resultRelInfos; + int numResultRelInfos; + + /* Find the set of partitions so that we can find their TupleDescs. */ + if (mtstate->mt_partition_dispatch_info != NULL) + { + /* + * For INSERT via a partitioned table, we need TupleDescs based + * on the partition routing table. + */ + resultRelInfos = mtstate->mt_partitions; + numResultRelInfos = mtstate->mt_num_partitions; + } + else + { + /* Otherwise we need the ResultRelInfo for each subplan. */ + resultRelInfos = mtstate->resultRelInfo; + numResultRelInfos = mtstate->mt_nplans; + } + + /* + * Build array of conversion maps from each child's TupleDesc to the + * one used in the tuplestore. The map pointers may be NULL when no + * conversion is necessary, which is hopefully a common case for + * partitions. + */ + mtstate->mt_transition_tupconv_maps = (TupleConversionMap **) + palloc0(sizeof(TupleConversionMap *) * numResultRelInfos); + for (i = 0; i < numResultRelInfos; ++i) + { + mtstate->mt_transition_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(resultRelInfos[i].ri_RelationDesc), + RelationGetDescr(targetRelInfo->ri_RelationDesc), + gettext_noop("could not convert row type")); + } + + /* + * Install the conversion map for the first plan for UPDATE and DELETE + * operations. It will be advanced each time we switch to the next + * plan. (INSERT operations set it every time.) + */ + mtstate->mt_transition_capture->tcs_map = + mtstate->mt_transition_tupconv_maps[0]; + } +} /* ---------------------------------------------------------------- * ExecModifyTable @@ -1509,6 +1621,13 @@ ExecModifyTable(ModifyTableState *node) estate->es_result_relation_info = resultRelInfo; EvalPlanQualSetPlan(&node->mt_epqstate, subplanstate->plan, node->mt_arowmarks[node->mt_whichplan]); + if (node->mt_transition_capture != NULL) + { + /* Prepare to convert transition tuples from this child.
*/ + Assert(node->mt_transition_tupconv_maps != NULL); + node->mt_transition_capture->tcs_map = + node->mt_transition_tupconv_maps[node->mt_whichplan]; + } continue; } else @@ -1618,11 +1737,11 @@ ExecModifyTable(ModifyTableState *node) estate, node->canSetTag); break; case CMD_UPDATE: - slot = ExecUpdate(tupleid, oldtuple, slot, planSlot, + slot = ExecUpdate(node, tupleid, oldtuple, slot, planSlot, &node->mt_epqstate, estate, node->canSetTag); break; case CMD_DELETE: - slot = ExecDelete(tupleid, oldtuple, planSlot, + slot = ExecDelete(node, tupleid, oldtuple, planSlot, &node->mt_epqstate, estate, node->canSetTag); break; default: @@ -1804,6 +1923,9 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) mtstate->mt_partition_tuple_slot = partition_tuple_slot; } + /* Build state for collecting transition tuples */ + ExecSetupTransitionCaptureState(mtstate, estate); + /* * Initialize any WITH CHECK OPTION constraints if needed. */ diff --git a/src/include/catalog/pg_inherits_fn.h b/src/include/catalog/pg_inherits_fn.h index abfa4766a1..7743388899 100644 --- a/src/include/catalog/pg_inherits_fn.h +++ b/src/include/catalog/pg_inherits_fn.h @@ -21,6 +21,7 @@ extern List *find_inheritance_children(Oid parentrelId, LOCKMODE lockmode); extern List *find_all_inheritors(Oid parentrelId, LOCKMODE lockmode, List **parents); extern bool has_subclass(Oid relationId); +extern bool has_superclass(Oid relationId); extern bool typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId); #endif /* PG_INHERITS_FN_H */ diff --git a/src/include/commands/trigger.h b/src/include/commands/trigger.h index 10ac724feb..51a25c8ddc 100644 --- a/src/include/commands/trigger.h +++ b/src/include/commands/trigger.h @@ -41,6 +41,39 @@ typedef struct TriggerData Tuplestorestate *tg_newtable; } TriggerData; +/* + * Meta-data to control the capture of old and new tuples into transition + * tables from child tables. 
+ */ +typedef struct TransitionCaptureState +{ + /* + * Is there at least one trigger specifying each transition relation on + * the relation explicitly named in the DML statement or COPY command? + */ + bool tcs_delete_old_table; + bool tcs_update_old_table; + bool tcs_update_new_table; + bool tcs_insert_new_table; + + /* + * For UPDATE and DELETE, AfterTriggerSaveEvent may need to convert the + * new and old tuples from a child table's format to the format of the + * relation named in a query so that it is compatible with the transition + * tuplestores. + */ + TupleConversionMap *tcs_map; + + /* + * For INSERT and COPY, it would be wasteful to convert tuples from child + * format to parent format after they have already been converted in the + * opposite direction during routing. In that case we bypass conversion + * and allow the inserting code (copy.c and nodeModifyTable.c) to provide + * the original tuple directly. + */ + HeapTuple tcs_original_insert_tuple; +} TransitionCaptureState; + /* * TriggerEvent bit flags * @@ -127,6 +160,9 @@ extern void RelationBuildTriggers(Relation relation); extern TriggerDesc *CopyTriggerDesc(TriggerDesc *trigdesc); +extern const char *FindTriggerIncompatibleWithInheritance(TriggerDesc *trigdesc); +extern TransitionCaptureState *MakeTransitionCaptureState(TriggerDesc *trigdesc); + extern void FreeTriggerDesc(TriggerDesc *trigdesc); extern void ExecBSInsertTriggers(EState *estate, @@ -139,7 +175,8 @@ extern TupleTableSlot *ExecBRInsertTriggers(EState *estate, extern void ExecARInsertTriggers(EState *estate, ResultRelInfo *relinfo, HeapTuple trigtuple, - List *recheckIndexes); + List *recheckIndexes, + TransitionCaptureState *transition_capture); extern TupleTableSlot *ExecIRInsertTriggers(EState *estate, ResultRelInfo *relinfo, TupleTableSlot *slot); @@ -155,7 +192,8 @@ extern bool ExecBRDeleteTriggers(EState *estate, extern void ExecARDeleteTriggers(EState *estate, ResultRelInfo *relinfo, ItemPointer tupleid, - HeapTuple 
fdw_trigtuple); + HeapTuple fdw_trigtuple, + TransitionCaptureState *transition_capture); extern bool ExecIRDeleteTriggers(EState *estate, ResultRelInfo *relinfo, HeapTuple trigtuple); @@ -174,7 +212,8 @@ extern void ExecARUpdateTriggers(EState *estate, ItemPointer tupleid, HeapTuple fdw_trigtuple, HeapTuple newtuple, - List *recheckIndexes); + List *recheckIndexes, + TransitionCaptureState *transition_capture); extern TupleTableSlot *ExecIRUpdateTriggers(EState *estate, ResultRelInfo *relinfo, HeapTuple trigtuple, diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 54c5cf5f95..85fac8ab91 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -963,6 +963,10 @@ typedef struct ModifyTableState TupleConversionMap **mt_partition_tupconv_maps; /* Per partition tuple conversion map */ TupleTableSlot *mt_partition_tuple_slot; + struct TransitionCaptureState *mt_transition_capture; + /* controls transition table population */ + TupleConversionMap **mt_transition_tupconv_maps; + /* Per plan/partition tuple conversion */ } ModifyTableState; /* ---------------- diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 29b8adf1e2..995410f1aa 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -1793,31 +1793,6 @@ drop table upsert; drop function upsert_before_func(); drop function upsert_after_func(); -- --- Verify that triggers are prevented on partitioned tables if they would --- access row data (ROW and STATEMENT-with-transition-table) --- -create table my_table (i int) partition by list (i); -create table my_table_42 partition of my_table for values in (42); -create function my_trigger_function() returns trigger as $$ begin end; $$ language plpgsql; -create trigger my_trigger before update on my_table for each row execute procedure my_trigger_function(); -ERROR: "my_table" is a partitioned table -DETAIL: Partitioned tables cannot 
have ROW triggers. -create trigger my_trigger after update on my_table referencing old table as old_table - for each statement execute procedure my_trigger_function(); -ERROR: "my_table" is a partitioned table -DETAIL: Triggers on partitioned tables cannot have transition tables. --- --- Verify that triggers are allowed on partitions --- -create trigger my_trigger before update on my_table_42 for each row execute procedure my_trigger_function(); -drop trigger my_trigger on my_table_42; -create trigger my_trigger after update on my_table_42 referencing old table as old_table - for each statement execute procedure my_trigger_function(); -drop trigger my_trigger on my_table_42; -drop function my_trigger_function(); -drop table my_table_42; -drop table my_table; --- -- Verify that triggers with transition tables are not allowed on -- views -- @@ -1922,3 +1897,304 @@ copy parted_stmt_trig1(a) from stdin; NOTICE: trigger on parted_stmt_trig1 BEFORE INSERT for ROW NOTICE: trigger on parted_stmt_trig1 AFTER INSERT for ROW drop table parted_stmt_trig, parted2_stmt_trig; +-- +-- Test the interaction between transition tables and both kinds of +-- inheritance. We'll dump the contents of the transition tables in a +-- format that shows the attribute order, so that we can distinguish +-- tuple formats (though not dropped attributes). 
+-- +create or replace function dump_insert() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, new table = %', + TG_NAME, + (select string_agg(new_table::text, ', ' order by a) from new_table); + return null; + end; +$$; +create or replace function dump_update() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, old table = %, new table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' order by a) from old_table), + (select string_agg(new_table::text, ', ' order by a) from new_table); + return null; + end; +$$; +create or replace function dump_delete() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, old table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' order by a) from old_table); + return null; + end; +$$; +-- +-- Verify behavior of statement triggers on partition hierarchy with +-- transition tables. Tuples should appear to each trigger in the +-- format of the relation the trigger is attached to.
+-- +-- set up a partition hierarchy with some different TupleDescriptors +create table parent (a text, b int) partition by list (a); +-- a child matching parent +create table child1 partition of parent for values in ('AAA'); +-- a child with a dropped column +create table child2 (x int, a text, b int); +alter table child2 drop column x; +alter table parent attach partition child2 for values in ('BBB'); +-- a child with a different column order +create table child3 (b int, a text); +alter table parent attach partition child3 for values in ('CCC'); +create trigger parent_insert_trig + after insert on parent referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger parent_update_trig + after update on parent referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger parent_delete_trig + after delete on parent referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child1_insert_trig + after insert on child1 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child1_update_trig + after update on child1 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child1_delete_trig + after delete on child1 referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child2_insert_trig + after insert on child2 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child2_update_trig + after update on child2 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child2_delete_trig + after delete on child2 referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child3_insert_trig 
+ after insert on child3 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child3_update_trig + after update on child3 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child3_delete_trig + after delete on child3 referencing old table as old_table + for each statement execute procedure dump_delete(); +-- insert directly into children sees respective child-format tuples +insert into child1 values ('AAA', 42); +NOTICE: trigger = child1_insert_trig, new table = (AAA,42) +insert into child2 values ('BBB', 42); +NOTICE: trigger = child2_insert_trig, new table = (BBB,42) +insert into child3 values (42, 'CCC'); +NOTICE: trigger = child3_insert_trig, new table = (42,CCC) +-- update via parent sees parent-format tuples +update parent set b = b + 1; +NOTICE: trigger = parent_update_trig, old table = (AAA,42), (BBB,42), (CCC,42), new table = (AAA,43), (BBB,43), (CCC,43) +-- delete via parent sees parent-format tuples +delete from parent; +NOTICE: trigger = parent_delete_trig, old table = (AAA,43), (BBB,43), (CCC,43) +-- insert into parent sees parent-format tuples +insert into parent values ('AAA', 42); +NOTICE: trigger = parent_insert_trig, new table = (AAA,42) +insert into parent values ('BBB', 42); +NOTICE: trigger = parent_insert_trig, new table = (BBB,42) +insert into parent values ('CCC', 42); +NOTICE: trigger = parent_insert_trig, new table = (CCC,42) +-- delete from children sees respective child-format tuples +delete from child1; +NOTICE: trigger = child1_delete_trig, old table = (AAA,42) +delete from child2; +NOTICE: trigger = child2_delete_trig, old table = (BBB,42) +delete from child3; +NOTICE: trigger = child3_delete_trig, old table = (42,CCC) +-- copy into parent sees parent-format tuples +copy parent (a, b) from stdin; +NOTICE: trigger = parent_insert_trig, new table = (AAA,42), (BBB,42), (CCC,42) +-- DML affecting parent sees 
tuples collected from children even if +-- there is no transition table trigger on the children +drop trigger child1_insert_trig on child1; +drop trigger child1_update_trig on child1; +drop trigger child1_delete_trig on child1; +drop trigger child2_insert_trig on child2; +drop trigger child2_update_trig on child2; +drop trigger child2_delete_trig on child2; +drop trigger child3_insert_trig on child3; +drop trigger child3_update_trig on child3; +drop trigger child3_delete_trig on child3; +delete from parent; +NOTICE: trigger = parent_delete_trig, old table = (AAA,42), (BBB,42), (CCC,42) +-- copy into parent sees tuples collected from children even if there +-- is no transition-table trigger on the children +copy parent (a, b) from stdin; +NOTICE: trigger = parent_insert_trig, new table = (AAA,42), (BBB,42), (CCC,42) +-- insert into parent when a before trigger on a child modifies the tuple +-- before insertion; we capture the newly modified row in parent format +create or replace function intercept_insert() returns trigger language plpgsql as +$$ + begin + new.b = new.b + 1000; + return new; + end; +$$; +create trigger intercept_insert_child3 + before insert on child3 + for each row execute procedure intercept_insert(); +-- insert, parent trigger sees post-modification parent-format tuple +insert into parent values ('AAA', 42), ('BBB', 42), ('CCC', 66); +NOTICE: trigger = parent_insert_trig, new table = (AAA,42), (BBB,42), (CCC,1066) +-- copy, parent trigger sees post-modification parent-format tuple +copy parent (a, b) from stdin; +NOTICE: trigger = parent_insert_trig, new table = (AAA,42), (BBB,42), (CCC,1234) +drop table child1, child2, child3, parent; +drop function intercept_insert(); +-- +-- Verify prohibition of row triggers with transition triggers on +-- partitions +-- +create table parent (a text, b int) partition by list (a); +create table child partition of parent for values in ('AAA'); +-- adding row trigger with transition table fails +create trigger
child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); +ERROR: ROW triggers with transition tables are not supported on partitions +-- detaching it first works +alter table parent detach partition child; +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); +-- but now we're not allowed to reattach it +alter table parent attach partition child for values in ('AAA'); +ERROR: trigger "child_row_trig" prevents table "child" from becoming a partition +DETAIL: ROW triggers with transition tables are not supported on partitions +-- drop the trigger, and now we're allowed to attach it again +drop trigger child_row_trig on child; +alter table parent attach partition child for values in ('AAA'); +drop table child, parent; +-- +-- Verify behavior of statement triggers on (non-partition) +-- inheritance hierarchy with transition tables; similar to the +-- partition case, except there is no rerouting on insertion and child +-- tables can have extra columns +-- +-- set up inheritance hierarchy with different TupleDescriptors +create table parent (a text, b int); +-- a child matching parent +create table child1 () inherits (parent); +-- a child with a different column order +create table child2 (b int, a text); +alter table child2 inherit parent; +-- a child with an extra column +create table child3 (c text) inherits (parent); +create trigger parent_insert_trig + after insert on parent referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger parent_update_trig + after update on parent referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger parent_delete_trig + after delete on parent referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child1_insert_trig + after 
insert on child1 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child1_update_trig + after update on child1 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child1_delete_trig + after delete on child1 referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child2_insert_trig + after insert on child2 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child2_update_trig + after update on child2 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child2_delete_trig + after delete on child2 referencing old table as old_table + for each statement execute procedure dump_delete(); +create trigger child3_insert_trig + after insert on child3 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child3_update_trig + after update on child3 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child3_delete_trig + after delete on child3 referencing old table as old_table + for each statement execute procedure dump_delete(); +-- insert directly into children sees respective child-format tuples +insert into child1 values ('AAA', 42); +NOTICE: trigger = child1_insert_trig, new table = (AAA,42) +insert into child2 values (42, 'BBB'); +NOTICE: trigger = child2_insert_trig, new table = (42,BBB) +insert into child3 values ('CCC', 42, 'foo'); +NOTICE: trigger = child3_insert_trig, new table = (CCC,42,foo) +-- update via parent sees parent-format tuples +update parent set b = b + 1; +NOTICE: trigger = parent_update_trig, old table = (AAA,42), (BBB,42), (CCC,42), new table = (AAA,43), (BBB,43), (CCC,43) +-- delete via parent sees parent-format tuples 
+delete from parent; +NOTICE: trigger = parent_delete_trig, old table = (AAA,43), (BBB,43), (CCC,43) +-- reinsert values into children for next test... +insert into child1 values ('AAA', 42); +NOTICE: trigger = child1_insert_trig, new table = (AAA,42) +insert into child2 values (42, 'BBB'); +NOTICE: trigger = child2_insert_trig, new table = (42,BBB) +insert into child3 values ('CCC', 42, 'foo'); +NOTICE: trigger = child3_insert_trig, new table = (CCC,42,foo) +-- delete from children sees respective child-format tuples +delete from child1; +NOTICE: trigger = child1_delete_trig, old table = (AAA,42) +delete from child2; +NOTICE: trigger = child2_delete_trig, old table = (42,BBB) +delete from child3; +NOTICE: trigger = child3_delete_trig, old table = (CCC,42,foo) +-- copy into parent sees parent-format tuples (no rerouting, so these +-- are really inserted into the parent) +copy parent (a, b) from stdin; +NOTICE: trigger = parent_insert_trig, new table = (AAA,42), (BBB,42), (CCC,42) +-- DML affecting parent sees tuples collected from children even if +-- there is no transition table trigger on the children +drop trigger child1_insert_trig on child1; +drop trigger child1_update_trig on child1; +drop trigger child1_delete_trig on child1; +drop trigger child2_insert_trig on child2; +drop trigger child2_update_trig on child2; +drop trigger child2_delete_trig on child2; +drop trigger child3_insert_trig on child3; +drop trigger child3_update_trig on child3; +drop trigger child3_delete_trig on child3; +delete from parent; +NOTICE: trigger = parent_delete_trig, old table = (AAA,42), (BBB,42), (CCC,42) +drop table child1, child2, child3, parent; +-- +-- Verify prohibition of row triggers with transition triggers on +-- inheritance children +-- +create table parent (a text, b int); +create table child () inherits (parent); +-- adding row trigger with transition table fails +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row 
execute procedure dump_insert(); +ERROR: ROW triggers with transition tables are not supported on inheritance children +-- disinheriting it first works +alter table child no inherit parent; +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); +-- but now we're not allowed to make it inherit anymore +alter table child inherit parent; +ERROR: trigger "child_row_trig" prevents table "child" from becoming an inheritance child +DETAIL: ROW triggers with transition tables are not supported in inheritance hierarchies +-- drop the trigger, and now we're allowed to make it inherit again +drop trigger child_row_trig on child; +alter table child inherit parent; +drop table child, parent; +-- cleanup +drop function dump_insert(); +drop function dump_update(); +drop function dump_delete(); diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 9f2ed88f20..683a5f1e5c 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1272,30 +1272,6 @@ drop table upsert; drop function upsert_before_func(); drop function upsert_after_func(); --- --- Verify that triggers are prevented on partitioned tables if they would --- access row data (ROW and STATEMENT-with-transition-table) --- - -create table my_table (i int) partition by list (i); -create table my_table_42 partition of my_table for values in (42); -create function my_trigger_function() returns trigger as $$ begin end; $$ language plpgsql; -create trigger my_trigger before update on my_table for each row execute procedure my_trigger_function(); -create trigger my_trigger after update on my_table referencing old table as old_table - for each statement execute procedure my_trigger_function(); - --- --- Verify that triggers are allowed on partitions --- -create trigger my_trigger before update on my_table_42 for each row execute procedure my_trigger_function(); -drop trigger my_trigger 
on my_table_42; -create trigger my_trigger after update on my_table_42 referencing old table as old_table - for each statement execute procedure my_trigger_function(); -drop trigger my_trigger on my_table_42; -drop function my_trigger_function(); -drop table my_table_42; -drop table my_table; - -- -- Verify that triggers with transition tables are not allowed on -- views @@ -1391,3 +1367,344 @@ copy parted_stmt_trig1(a) from stdin; \. drop table parted_stmt_trig, parted2_stmt_trig; + +-- +-- Test the interaction between transition tables and both kinds of +-- inheritance. We'll dump the contents of the transition tables in a +-- format that shows the attribute order, so that we can distinguish +-- tuple formats (though not dropped attributes). +-- + +create or replace function dump_insert() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, new table = %', + TG_NAME, + (select string_agg(new_table::text, ', ' order by a) from new_table); + return null; + end; +$$; + +create or replace function dump_update() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, old table = %, new table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' order by a) from old_table), + (select string_agg(new_table::text, ', ' order by a) from new_table); + return null; + end; +$$; + +create or replace function dump_delete() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = %, old table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' order by a) from old_table); + return null; + end; +$$; + +-- +-- Verify behavior of statement triggers on partition hierarchy with +-- transition tables. Tuples should appear to each trigger in the +-- format of the relation the trigger is attached to.
+-- + +-- set up a partition hierarchy with some different TupleDescriptors +create table parent (a text, b int) partition by list (a); + +-- a child matching parent +create table child1 partition of parent for values in ('AAA'); + +-- a child with a dropped column +create table child2 (x int, a text, b int); +alter table child2 drop column x; +alter table parent attach partition child2 for values in ('BBB'); + +-- a child with a different column order +create table child3 (b int, a text); +alter table parent attach partition child3 for values in ('CCC'); + +create trigger parent_insert_trig + after insert on parent referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger parent_update_trig + after update on parent referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger parent_delete_trig + after delete on parent referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger child1_insert_trig + after insert on child1 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child1_update_trig + after update on child1 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child1_delete_trig + after delete on child1 referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger child2_insert_trig + after insert on child2 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child2_update_trig + after update on child2 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child2_delete_trig + after delete on child2 referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger 
child3_insert_trig + after insert on child3 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child3_update_trig + after update on child3 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child3_delete_trig + after delete on child3 referencing old table as old_table + for each statement execute procedure dump_delete(); + +-- insert directly into children sees respective child-format tuples +insert into child1 values ('AAA', 42); +insert into child2 values ('BBB', 42); +insert into child3 values (42, 'CCC'); + +-- update via parent sees parent-format tuples +update parent set b = b + 1; + +-- delete via parent sees parent-format tuples +delete from parent; + +-- insert into parent sees parent-format tuples +insert into parent values ('AAA', 42); +insert into parent values ('BBB', 42); +insert into parent values ('CCC', 42); + +-- delete from children sees respective child-format tuples +delete from child1; +delete from child2; +delete from child3; + +-- copy into parent sees parent-format tuples +copy parent (a, b) from stdin; +AAA 42 +BBB 42 +CCC 42 +\. + +-- DML affecting parent sees tuples collected from children even if +-- there is no transition table trigger on the children +drop trigger child1_insert_trig on child1; +drop trigger child1_update_trig on child1; +drop trigger child1_delete_trig on child1; +drop trigger child2_insert_trig on child2; +drop trigger child2_update_trig on child2; +drop trigger child2_delete_trig on child2; +drop trigger child3_insert_trig on child3; +drop trigger child3_update_trig on child3; +drop trigger child3_delete_trig on child3; +delete from parent; + +-- copy into parent sees tuples collected from children even if there +-- is no transition-table trigger on the children +copy parent (a, b) from stdin; +AAA 42 +BBB 42 +CCC 42 +\. 
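The child-to-parent conversions these tests exercise are driven by the maps built in ExecSetupTransitionCaptureState via convert_tuples_by_name, which returns a NULL map when a child's row format already matches the parent's. A minimal Python model of that logic (helper names are invented for illustration; this is not the PostgreSQL API):

```python
# Toy model of the conversion-map logic used for transition tuple capture:
# build_conversion_map plays the role of convert_tuples_by_name, returning
# None (a "NULL map") when the child's attribute order already matches the
# parent's, so no per-row conversion is needed.

def build_conversion_map(child_attrs, parent_attrs):
    """Return child attribute positions in parent order, or None if identical."""
    if child_attrs == parent_attrs:
        return None                       # no conversion necessary
    return [child_attrs.index(name) for name in parent_attrs]

def convert_tuple(tup, conv_map):
    """Reorder a child-format tuple into parent format."""
    if conv_map is None:                  # hopefully the common case
        return tup
    return tuple(tup[i] for i in conv_map)

parent = ["a", "b"]                       # parent (a text, b int)
child1 = ["a", "b"]                       # child1 matches parent
child3 = ["b", "a"]                       # child3 has a different column order

assert build_conversion_map(child1, parent) is None
print(convert_tuple((42, "CCC"), build_conversion_map(child3, parent)))
# prints ('CCC', 42): the child3 row as a parent-level trigger sees it
```

The None fast path is why the patch comments call a NULL map "hopefully a common case for partitions": identically-shaped children pay nothing per row.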
+ +-- insert into parent when a before trigger on a child modifies the tuple +-- before insertion; we capture the newly modified row in parent format +create or replace function intercept_insert() returns trigger language plpgsql as +$$ + begin + new.b = new.b + 1000; + return new; + end; +$$; + +create trigger intercept_insert_child3 + before insert on child3 + for each row execute procedure intercept_insert(); + + +-- insert, parent trigger sees post-modification parent-format tuple +insert into parent values ('AAA', 42), ('BBB', 42), ('CCC', 66); + +-- copy, parent trigger sees post-modification parent-format tuple +copy parent (a, b) from stdin; +AAA 42 +BBB 42 +CCC 234 +\. + +drop table child1, child2, child3, parent; +drop function intercept_insert(); + +-- +-- Verify prohibition of row triggers with transition triggers on +-- partitions +-- +create table parent (a text, b int) partition by list (a); +create table child partition of parent for values in ('AAA'); + +-- adding row trigger with transition table fails +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); + +-- detaching it first works +alter table parent detach partition child; + +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); + +-- but now we're not allowed to reattach it +alter table parent attach partition child for values in ('AAA'); + +-- drop the trigger, and now we're allowed to attach it again +drop trigger child_row_trig on child; +alter table parent attach partition child for values in ('AAA'); + +drop table child, parent; + +-- +-- Verify behavior of statement triggers on (non-partition) +-- inheritance hierarchy with transition tables; similar to the +-- partition case, except there is no rerouting on insertion and child +-- tables can have extra columns +-- + +-- set up inheritance hierarchy with different

TupleDescriptors +create table parent (a text, b int); + +-- a child matching parent +create table child1 () inherits (parent); + +-- a child with a different column order +create table child2 (b int, a text); +alter table child2 inherit parent; + +-- a child with an extra column +create table child3 (c text) inherits (parent); + +create trigger parent_insert_trig + after insert on parent referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger parent_update_trig + after update on parent referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger parent_delete_trig + after delete on parent referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger child1_insert_trig + after insert on child1 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child1_update_trig + after update on child1 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child1_delete_trig + after delete on child1 referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger child2_insert_trig + after insert on child2 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child2_update_trig + after update on child2 referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger child2_delete_trig + after delete on child2 referencing old table as old_table + for each statement execute procedure dump_delete(); + +create trigger child3_insert_trig + after insert on child3 referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger child3_update_trig + after update on child3 referencing old table as old_table new 
table as new_table + for each statement execute procedure dump_update(); +create trigger child3_delete_trig + after delete on child3 referencing old table as old_table + for each statement execute procedure dump_delete(); + +-- insert directly into children sees respective child-format tuples +insert into child1 values ('AAA', 42); +insert into child2 values (42, 'BBB'); +insert into child3 values ('CCC', 42, 'foo'); + +-- update via parent sees parent-format tuples +update parent set b = b + 1; + +-- delete via parent sees parent-format tuples +delete from parent; + +-- reinsert values into children for next test... +insert into child1 values ('AAA', 42); +insert into child2 values (42, 'BBB'); +insert into child3 values ('CCC', 42, 'foo'); + +-- delete from children sees respective child-format tuples +delete from child1; +delete from child2; +delete from child3; + +-- copy into parent sees parent-format tuples (no rerouting, so these +-- are really inserted into the parent) +copy parent (a, b) from stdin; +AAA 42 +BBB 42 +CCC 42 +\. 
+ +-- DML affecting parent sees tuples collected from children even if +-- there is no transition table trigger on the children +drop trigger child1_insert_trig on child1; +drop trigger child1_update_trig on child1; +drop trigger child1_delete_trig on child1; +drop trigger child2_insert_trig on child2; +drop trigger child2_update_trig on child2; +drop trigger child2_delete_trig on child2; +drop trigger child3_insert_trig on child3; +drop trigger child3_update_trig on child3; +drop trigger child3_delete_trig on child3; +delete from parent; + +drop table child1, child2, child3, parent; + +-- +-- Verify prohibition of row triggers with transition triggers on +-- inheritance children +-- +create table parent (a text, b int); +create table child () inherits (parent); + +-- adding row trigger with transition table fails +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); + +-- disinheriting it first works +alter table child no inherit parent; + +create trigger child_row_trig + after insert on child referencing new table as new_table + for each row execute procedure dump_insert(); + +-- but now we're not allowed to make it inherit anymore +alter table child inherit parent; + +-- drop the trigger, and now we're allowed to make it inherit again +drop trigger child_row_trig on child; +alter table child inherit parent; + +drop table child, parent; + +-- cleanup +drop function dump_insert(); +drop function dump_update(); +drop function dump_delete(); -- cgit v1.2.3 From 2dca03439f1f88e9102aa6bf613b434be0697dae Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 2 Jul 2017 00:10:57 -0400 Subject: doc: Document that logical replication supports synchronous replication Update the documentation a bit to include that logical replication as well as other and third-party replication clients can participate in synchronous replication. 
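[Editorial note: the commit above documents that any replication client reporting an application_name can participate in synchronous replication. A configuration sketch follows; the host, subscription, and standby names are illustrative and not taken from the patch.]

```
# postgresql.conf on the primary: wait for one of the named standbys.
# Each name matches the application_name the client reports.
synchronous_standby_names = 'FIRST 1 (sub1, walreceiver)'
```

A physical standby would set application_name in primary_conninfo in its recovery.conf (defaulting to walreceiver), while a logical subscriber created with CREATE SUBSCRIPTION sub1 ... reports its subscription name by default.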
--- doc/src/sgml/config.sgml | 29 +++++++++++++++++++---------- doc/src/sgml/high-availability.sgml | 12 ++++++++++++ doc/src/sgml/logical-replication.sgml | 8 ++++++++ 3 files changed, 39 insertions(+), 10 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 2485e6190d..80d1679b14 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3074,7 +3074,19 @@ include_dir 'conf.d' (as shown by a state of streaming in the pg_stat_replication view). - Specifying more than one standby names can allow very high availability. + Specifying more than one synchronous standby can allow for very high + availability and protection against data loss. + + + The name of a standby server for this purpose is the + application_name setting of the standby, as set in the + standby's connection information. In case of a physical replication + standby, this should be set in the primary_conninfo + setting in recovery.conf; the default + is walreceiver. For logical replication, this can + be set in the connection information of the subscription, and it + defaults to the subscription name. For other replication stream + consumers, consult their documentation. This parameter specifies a list of standby servers using @@ -3136,15 +3148,12 @@ ANY num_sync ( + + A logical replication subscription can be a standby for synchronous + replication (see ). The standby + name is by default the subscription name. An alternative name can be + specified as application_name in the connection + information of the subscription. + + Subscriptions are dumped by pg_dump if the current user is a superuser. 
Otherwise a warning is written and subscriptions are -- cgit v1.2.3 From 42794d6749f24636efbb198db17c30c63df10900 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Mon, 3 Jul 2017 16:16:35 +0100 Subject: Don't mention SSL methods that aren't reachable in docs Author: Michael Paquier --- doc/src/sgml/libpq.sgml | 3 +-- doc/src/sgml/sslinfo.sgml | 4 ++-- 2 files changed, 3 insertions(+), 4 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 1b3d55b8b9..f0167a64bc 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -1984,8 +1984,7 @@ const char *PQsslAttribute(const PGconn *conn, const char *attribute_name); SSL/TLS version in use. Common values - are "SSLv2", "SSLv3", - "TLSv1", "TLSv1.1" + are "TLSv1", "TLSv1.1" and "TLSv1.2", but an implementation may return other strings if some other protocol is used. diff --git a/doc/src/sgml/sslinfo.sgml b/doc/src/sgml/sslinfo.sgml index a4ce7e8d2c..7bda33efa3 100644 --- a/doc/src/sgml/sslinfo.sgml +++ b/doc/src/sgml/sslinfo.sgml @@ -47,8 +47,8 @@ - Returns the name of the protocol used for the SSL connection (e.g. SSLv2, - SSLv3, or TLSv1). + Returns the name of the protocol used for the SSL connection (e.g. TLSv1.0, + TLSv1.1, or TLSv1.2). -- cgit v1.2.3 From 012d83f57aff973a73214262f3d87105786e3500 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 3 Jul 2017 23:37:53 -0400 Subject: Document how logical replication deals with statement triggers Reported-by: Константин Евтеев Bug: #14699 --- doc/src/sgml/logical-replication.sgml | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'doc/src') diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml index 6576272458..fa0bb56b7b 100644 --- a/doc/src/sgml/logical-replication.sgml +++ b/doc/src/sgml/logical-replication.sgml @@ -429,6 +429,13 @@ and constraints. + + The logical replication apply process currently only fires row triggers, + not statement triggers.
The initial table synchronization, however, is + implemented like a COPY command and thus fires both row + and statement triggers for INSERT. + + Initial Snapshot -- cgit v1.2.3 From 5191e357cf22e200a9baaf97bbe8a07ee2537537 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 4 Jul 2017 21:10:08 -0400 Subject: psql documentation fixes Update the documentation for \pset to mention columns|linestyle|pager_min_lines. Add various mentions of \pset command equivalences that were previously inconsistent. Author: Дилян Палаузов --- doc/src/sgml/ref/psql-ref.sgml | 30 +++++++++++++++++------------- src/bin/psql/help.c | 8 +++++--- 2 files changed, 22 insertions(+), 16 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 9faa365481..c592edac60 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -68,7 +68,8 @@ PostgreSQL documentation Switches to unaligned output mode. (The default output mode is - otherwise aligned.) + otherwise aligned.) This is equivalent to \pset format + unaligned. @@ -371,8 +372,8 @@ EOF Use separator as the - record separator for unaligned output. This is equivalent to the - \pset recordsep command. + record separator for unaligned output. This is equivalent to + \pset recordsep. @@ -415,7 +416,8 @@ EOF Turn off printing of column names and result row count footers, - etc. This is equivalent to the \t command. + etc. This is equivalent to \t or + \pset tuples_only. @@ -427,7 +429,7 @@ EOF Specifies options to be placed within the HTML table tag. See - \pset for details. + \pset tableattr for details. @@ -522,8 +524,8 @@ EOF - Turn on the expanded table formatting mode. This is equivalent to the - \x command. + Turn on the expanded table formatting mode. This is equivalent to + \x or \pset expanded. @@ -545,7 +547,8 @@ EOF - Set the field separator for unaligned output to a zero byte. + Set the field separator for unaligned output to a zero byte. 
This is + equivalent to \pset fieldsep_zero. @@ -557,6 +560,7 @@ EOF Set the record separator for unaligned output to a zero byte. This is useful for interfacing, for example, with xargs -0. + This is equivalent to \pset recordsep_zero. @@ -1907,9 +1911,8 @@ Tue Oct 26 21:40:57 CEST 1999 Sets the field separator for unaligned query output. The default - is the vertical bar (|). See also - \pset for a generic way of setting output - options. + is the vertical bar (|). It is equivalent to + \pset fieldsep. @@ -2810,8 +2813,9 @@ lo_import 152801 There are various shortcut commands for \pset. See - \a, \C, \H, - \t, \T, and \x. + \a, \C, \f, + \H, \t, \T, + and \x. diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c index 8e08da79e9..b3dbb5946e 100644 --- a/src/bin/psql/help.c +++ b/src/bin/psql/help.c @@ -271,9 +271,11 @@ slashUsage(unsigned short int pager) fprintf(output, _(" \\H toggle HTML output mode (currently %s)\n"), ON(pset.popt.topt.format == PRINT_HTML)); fprintf(output, _(" \\pset [NAME [VALUE]] set table output option\n" - " (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|null|\n" - " numericlocale|recordsep|recordsep_zero|tuples_only|title|tableattr|pager|\n" - " unicode_border_linestyle|unicode_column_linestyle|unicode_header_linestyle})\n")); + " (NAME := {border|columns|expanded|fieldsep|fieldsep_zero|\n" - " footer|format|linestyle|null|numericlocale|pager|\n" + " pager_min_lines|recordsep|recordsep_zero|tableattr|title|\n" + " tuples_only|unicode_border_linestyle|\n" + " unicode_column_linestyle|unicode_header_linestyle})\n")); fprintf(output, _(" \\t [on|off] show only rows (currently %s)\n"), ON(pset.popt.topt.tuples_only)); fprintf(output, _(" \\T [STRING] set HTML tag attributes, or unset if none\n")); -- cgit v1.2.3 From 1bac5f552a25aca3aa2ef1d404f7cdf7788c34d8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 1 May 2017 12:10:17 -0400 Subject: pg_ctl: Make failure to complete operation a nonzero exit If an operation
being waited for does not complete within the timeout, then exit with a nonzero exit status. This was previously handled inconsistently. --- doc/src/sgml/ref/pg_ctl-ref.sgml | 7 +++++++ src/bin/pg_ctl/pg_ctl.c | 8 ++++++-- 2 files changed, 13 insertions(+), 2 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index 71e52c4c35..12fa011c4e 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -412,6 +412,13 @@ PostgreSQL documentation pg_ctl returns an exit code based on the success of the startup or shutdown. + + + If the operation does not complete within the timeout (see + option ), then pg_ctl exits with + a nonzero exit status. But note that the operation might continue in + the background and eventually succeed. + diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index 0c65196bda..4e02c4cea1 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -840,7 +840,9 @@ do_start(void) break; case POSTMASTER_STILL_STARTING: print_msg(_(" stopped waiting\n")); - print_msg(_("server is still starting up\n")); + write_stderr(_("%s: server did not start in time\n"), + progname); + exit(1); break; case POSTMASTER_FAILED: print_msg(_(" stopped waiting\n")); @@ -1166,7 +1168,9 @@ do_promote(void) else { print_msg(_(" stopped waiting\n")); - print_msg(_("server is still promoting\n")); + write_stderr(_("%s: server did not promote in time\n"), + progname); + exit(1); } } else -- cgit v1.2.3 From 485c515d0176d3210b5405ef23be8ed32cf5c93a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 9 Jul 2017 12:51:15 -0400 Subject: Doc: fix backwards description of visibility map's all-frozen data. Thinko in commit a892234f8. 
Vik Fearing Discussion: https://postgr.es/m/b6aaa23d-e26f-6404-a30d-e89431492d5d@2ndquadrant.com --- doc/src/sgml/storage.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'doc/src') diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml index a156693ec8..aed2cf8bca 100644 --- a/doc/src/sgml/storage.sgml +++ b/doc/src/sgml/storage.sgml @@ -630,7 +630,7 @@ can be used to examine the information stored in free space maps. Each heap relation has a Visibility Map (VM) to keep track of which pages contain only tuples that are known to be visible to all active transactions; it also keeps track of which pages contain -only unfrozen tuples. It's stored +only frozen tuples. It's stored alongside the main relation data in a separate relation fork, named after the filenode number of the relation, plus a _vm suffix. For example, if the filenode of a relation is 12345, the VM is stored in a file called -- cgit v1.2.3 From fad7873c8c9fae04accbdd3b7c226f451e9ab3b9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 9 Jul 2017 15:27:21 -0400 Subject: Doc: update v10 release notes through today. --- doc/src/sgml/release-10.sgml | 134 ++++++++++++++++++++++++++++++++++++++----- 1 file changed, 119 insertions(+), 15 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 8b1b66d874..2b64d525ae 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -6,7 +6,7 @@ Release date: - 2017-09-XX (CURRENT AS OF 2017-05-14) + 2017-??-?? 
(current as of 2017-07-09, commit 3381898f9) @@ -20,7 +20,7 @@ - + (to be written) @@ -49,6 +49,7 @@ pg_upgrade-ed hash indexes from previous major @@ -200,7 +201,7 @@ --> Make all actions wait - by default for completion (Peter Eisentraut) + for completion by default (Peter Eisentraut) @@ -299,6 +300,23 @@ + + Don't downcase unquoted text + within and related + settings (QL Zhuo) + + + + These settings are really lists of file names, but they were + previously treated as lists of SQL identifiers, which have different + parsing rules. + + + + + @@ -712,6 +730,7 @@ Add multi-column optimizer statistics to compute the correlation @@ -812,6 +831,7 @@ Use uniqueness guarantees to optimize certain join types (David @@ -851,6 +871,8 @@ Create a - Sequence metadata includes start, increment, etc, - which is now transactional. Sequence counters are - still stored in separate heap relations. Also add pg_sequences view - to show all sequences. + Sequence metadata includes the start, increment, etc, properties. + ALTER SEQUENCE updates are now fully transactional, + implying that the sequence is locked until commit. Sequences' + current values are still stored in individual table files, and + the nextval() and setval() functions + remain nontransactional. + + + + Also + add pg_sequences + view to show all sequences. @@ -2056,15 +2085,17 @@ - Allow libpq to connect to multiple specified host names + Allow specification of multiple host names in libpq + connection strings and URIs (Robert Haas) - libpq will connect with the first responsive host name. + libpq will connect to the first responsive server in the list. 
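[Editorial note: as an illustration of the multi-host feature described in the item above, the two equivalent libpq forms are sketched below; the host names are hypothetical.]

```
# Equivalent keyword/value and URI connection forms; libpq attempts
# each host in order and uses the first server that accepts the
# connection.
host=db1.example.com,db2.example.com dbname=mydb
postgresql://db1.example.com,db2.example.com/mydb
```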
@@ -2435,7 +2466,7 @@ Improve fsync handling of pg_basebackup and @@ -2495,12 +2526,60 @@ 2016-10-19 [caf936b09] pg_ctl: Add long option for -o --> - Add long options flag for pg_ctl - options (Peter Eisentraut) + Add long option for pg_ctl server options + ( + + + + + + Make pg_ctl start --wait detect server-ready by + watching postmaster.pid, not by attempting connections + (Tom Lane) + + + + The postmaster has been changed to report its ready-for-connections + status in postmaster.pid, and pg_ctl + now examines that file to detect whether startup is complete. + This is more efficient and reliable than the old method, and it + eliminates postmaster log entries about rejected connection + attempts during startup. + + + + + + + Reduce pg_ctl's reaction time when waiting for + postmaster start/stop (Tom Lane) - It is called + + + + + + Ensure that pg_ctl exits with nonzero status if an + operation being waited for does not complete within the timeout + (Peter Eisentraut) + + + + The start and promote operations now return + exit status 1, not 0, in such cases. The stop operation + has always done that. @@ -2531,6 +2610,30 @@ + + Improve behavior of pgindent + (Piotr Stefaniak, Tom Lane) + + + + We have switched to a new version of pg_bsd_indent + based on recent improvements made by the FreeBSD project. This + fixes numerous small bugs that led to odd C code formatting + decisions. Most notably, lines within parentheses (such as in a + multi-line function call) are now uniformly indented to match the + opening paren, even if that would result in code extending past the + right margin. + + + + + @@ -2638,6 +2741,7 @@ Add more robust random number generators to be used for -- cgit v1.2.3 From 749eceff4a1f9740391b126c81af9fd4bf3b1eaa Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 9 Jul 2017 20:11:21 -0400 Subject: Doc: desultory copy-editing for v10 release notes. 
Improve many item descriptions, improve markup, relocate some items that seemed to be in the wrong section. --- doc/src/sgml/release-10.sgml | 825 ++++++++++++++++++++++--------------------- 1 file changed, 432 insertions(+), 393 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 2b64d525ae..debaa80099 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -52,38 +52,35 @@ 2017-05-19 [a95410e2e] pg_upgrade: Handle hash index upgrades more smoothly. --> - pg_upgrade-ed hash indexes from previous major - PostgreSQL versions must be rebuilt. + Hash indexes must be rebuilt after pg_upgrade-ing + from any previous major PostgreSQL version (Mithun + Cy, Robert Haas) - Major hash storage improvements necessitated this requirement. - - - - - - - Change the default log - directory from pg_log to log (Andreas - Karlsson) + Major hash index improvements necessitated this requirement. + pg_upgrade will create a script to assist with this. - Rename pg_xlog to pg_wal (Michael Paquier) + Rename write-ahead log directory pg_xlog + to pg_wal, and rename transaction + status directory pg_clog to pg_xact + (Michael Paquier) - This prevents the write-ahead log directory from being confused as - containing server activity logs, and erroneously truncated. + Users have occasionally thought that these directories contained only + inessential log files, and proceeded to remove write-ahead log files + or transaction status files manually, causing irrecoverable data + loss. These name changes are intended to discourage such errors in + future. 
@@ -103,18 +100,10 @@ For example, pg_switch_xlog() becomes pg_switch_wal(), pg_receivexlog becomes pg_receivewal, and - - - - - - Rename transaction status directory pg_clog directory - to pg_xact (Michael Paquier) + becomes @@ -126,6 +115,10 @@ Rename WAL-related functions and views to use lsn instead of location (David Rowley) + + + There was previously an inconsistent mixture of the two terminologies. + @@ -161,17 +154,17 @@ 2017-01-04 [9a4d51077] Make wal streaming the default mode for pg_basebackup --> - Have stream the + Make stream the WAL needed to restore the backup by default (Magnus Hagander) - This changes the pg_basebackup + This changes pg_basebackup's @@ -181,7 +174,8 @@ --> Change how logical replication - uses pg_hba.conf. + uses pg_hba.conf + (Peter Eisentraut) @@ -190,7 +184,7 @@ of this release, logical replication matches a normal entry with a database name or keywords such as all. Physical replication continues to use the replication keyword. - Since built-in logical replication is new in this release as well, this + Since built-in logical replication is new in this release, this change only affects users of third-party logical replication plugins. @@ -220,14 +214,15 @@ - The server-side variable + The server parameter no longer supports off or plain. - The UNENCRYPTED option is no longer supported for + The UNENCRYPTED option is no longer supported in CREATE/ALTER USER ... PASSWORD. Similarly, the - @@ -243,13 +238,13 @@ - This makes a backwards-incompatible change to the handling of - composite types in arrays. Previously, you could return an array of - composite types as "[[col1, col2], [col1, col2]]", but now that is - interpreted as a two- dimensional array. Composite types in arrays - must now be returned as Python tuples, not lists, to resolve the - ambiguity. I.e. "[(col1, col2), (col1, col2)]". See the documentation - for more details.
CLARIFY + This feature requires a backwards-incompatible change to the handling + of arrays of composite types in PL/Python. Previously, you could + return an array of composite values by writing, e.g., [[col1, + col2], [col1, col2]]; but now that is interpreted as a + two-dimensional array. Composite types in arrays must now be written + as Python tuples, not lists, to resolve the ambiguity; that is, + write [(col1, col2), (col1, col2)] instead. @@ -258,27 +253,41 @@ 2017-02-27 [817f2a586] Remove PL/Tcl's "module" facility. --> - Remove PL/Tcl's "module" auto-loading facility (Tom Lane) + Remove PL/Tcl's module auto-loading facility (Tom Lane) - Replaced by new PL/Tcl startup GUCs. + This functionality has been replaced by new server + parameters + and , which are easier to use + and more similar to features available in other PLs. + + Change the default value of the + server parameter from pg_log to log + (Andreas Karlsson) + + + + + - Remove sql_inheritance GUC (Robert Haas) + Remove sql_inheritance server parameter (Robert Haas) - Changing this from the default value caused queries referencing - parent tables to not include children tables. The SQL + Changing this setting from the default value caused queries referencing + parent tables to not include child tables. The SQL standard requires such behavior and this has been the default since - PostgreSQL 7.1. + PostgreSQL 7.1. @@ -287,14 +296,14 @@ 2017-02-15 [51ee6f316] Replace min_parallel_relation_size with two new GUCs. --> - Add GUCs - and to control - parallel operation (Amit Kapila, Robert Haas) + Add + and server + parameters to control parallel queries (Amit Kapila, Robert Haas) - This replaces min_parallel_relation_size, which was - too generic. + These replace min_parallel_relation_size, which was + found to be too generic. 
@@ -305,7 +314,7 @@ Don't downcase unquoted text within and related - settings (QL Zhuo) + server parameters (QL Zhuo) @@ -325,8 +334,9 @@ - Users needing dump support for pre-8.0 servers need to use dump - binaries from PostgreSQL 9.6. + Users needing to dump from pre-8.0 servers will need to use dump + programs from PostgreSQL 9.6 or earlier. The + resulting output should still load successfully into newer servers. @@ -335,13 +345,13 @@ 2017-02-23 [b6aa17e0a] De-support floating-point timestamps. --> - Remove support for floating-point datetimes/timestamps (Tom Lane) + Remove support for floating-point timestamps and intervals (Tom Lane) This removes configure's @@ -354,7 +364,8 @@ - This protocol hasn't had client support since PostgreSQL 6.3. + This protocol hasn't had client support + since PostgreSQL 6.3. @@ -363,12 +374,12 @@ 2017-02-13 [7ada2d31f] Remove contrib/tsearch2. --> - Remove contrib/tsearch2 (Robert Haas) + Remove contrib/tsearch2 module (Robert Haas) - This removes compatibility with the contrib version of full text - search that shipped in pre-8.3 PostgreSQL versions. + This module provided compatibility with the version of full text + search that shipped in pre-8.3 PostgreSQL releases. @@ -377,8 +388,14 @@ 2017-03-23 [50c956add] Remove createlang and droplang --> - Remove createlang and droplang command-line applications (Peter - Eisentraut) + Remove createlang and droplang + command-line applications (Peter Eisentraut) + + + + These were deprecated since PostgreSQL 9.1. Instead, + use CREATE EXTENSION and DROP EXTENSION + directly. @@ -390,15 +407,10 @@ Remove support for version-0 function calling conventions (Andres Freund) - - - - Remove SCO and Unixware ports (Tom Lane) + Extensions providing C-coded functions must now conform to version 1 + calling conventions. Version 0 has been deprecated since 2001. 
@@ -411,7 +423,7 @@ Below you will find a detailed account of the changes between - PostgreSQL 10 and the previous major + PostgreSQL 10 and the previous major release. @@ -435,8 +447,8 @@ - Allows B-tree index pages to be checked by separate parallel - workers. + This change allows B-tree index pages to be searched by separate + parallel workers. @@ -499,13 +511,13 @@ 2016-12-05 [2b959d495] Reduce the default for max_worker_processes back to 8. --> - Add GUC + Add server parameter to limit the number of worker processes that can be used for query parallelism (Julien Rouhaud) - This can be set lower than to reserve worker processes for purposes other than parallel queries. @@ -555,7 +567,7 @@ Specifically, a new CREATE - INDEX option allows auto-summarizion of the + INDEX option allows auto-summarization of the previous BRIN page range when a new page range is created. @@ -572,7 +584,7 @@ - New SQL function SQL function brin_summarize_range() updates BRIN index summarization for a specified range and brin_desummarize_range() removes it. @@ -619,8 +631,8 @@ - This makes hash indexes crash-safe and replicated, and removes - the warning message about their use. + This makes hash indexes crash-safe and replicatable. + The former warning message about their use is removed. @@ -630,7 +642,7 @@ 2017-02-07 [293e24e50] Cache hash index's metapage in rel->rd_amcache. --> - Improve hash bucket split performance by reducing locking + Improve hash index bucket split performance by reducing locking requirements (Amit Kapila, Mithun Cy) @@ -653,7 +665,7 @@ 2017-03-15 [6977b8b7f] Port single-page btree vacuum logic to hash indexes. 
--> - Allow single-page hash pruning (Ashutosh Sharma) + Allow page-at-a-time hash index pruning (Ashutosh Sharma) @@ -671,33 +683,35 @@ - Only check for REFERENCES permission on referenced - tables (Tom Lane) + Reduce locking required for adding values to enum types (Andrew + Dunstan, Tom Lane) - Previously REFERENCES permission on the referencing - table was also required. + Previously it was impossible to run ALTER TYPE ... ADD + VALUE in a transaction block unless the enum type was created + in the same block. Now, only references to uncommitted enum + values from other transactions are prohibited. - Reduce locking required for adding values to enum types (Andrew - Dunstan, Tom Lane) + Reduce locking required to change table parameters (Simon Riggs, + Fabrízio Mello) - Previously it was impossible to run ALTER TYPE ... ADD - VALUE in a transaction block unless the enum type was created - in the same block. Now, only references to uncommitted enum - values from other transactions is prohibited. + For example, changing a table's setting can now be done + with a more lightweight lock. @@ -711,7 +725,8 @@ - The new settings are and . @@ -739,22 +754,12 @@ - New commands are CREATE, - ALTER, and - DROP STATISTICS. - This is helpful in - estimating query memory usage and when combining the statistics - from individual columns. - - - - - - - Improve planner matching of boolean indexes (Tom Lane) + New commands are CREATE STATISTICS, + ALTER STATISTICS, and + DROP STATISTICS. + This feature is helpful in estimating query memory usage and when + combining the statistics from individual columns. @@ -763,13 +768,14 @@ 2017-01-18 [215b43cdc] Improve RLS planning by marking individual quals with se --> - Improve performance of queries referencing row-level security + Improve performance of queries affected by row-level security restrictions (Tom Lane) - The optimizer now has more flexibility in reordering executor - behavior. 
+ The optimizer now has more knowledge about where it can place RLS + filter conditions, allowing better plans to be generated while still + enforcing the RLS conditions safely. @@ -787,11 +793,10 @@ 2016-09-02 [9cca11c91] Speed up SUM calculation in numeric aggregates. --> - Speed up SUM() calculations (Heikki Linnakangas) - - - - This uses an optimized numeric accumulator. + Speed up aggregate functions that calculate a running sum + using numeric-type arithmetic, including some variants + of SUM(), AVG(), + and STDDEV() (Heikki Linnakangas) @@ -800,7 +805,7 @@ 2017-03-13 [aeed17d00] Use radix tree for character encoding conversions. --> - Improve the performance of character encoding conversions by + Improve performance of character encoding conversions by using radix trees (Kyotaro Horiguchi, Heikki Linnakangas) @@ -810,7 +815,7 @@ 2017-03-25 [b8d7f053c] Faster expression evaluation and targetlist projection. --> - Reduce the function call overhead during query execution (Andres + Reduce expression evaluation overhead during query execution (Andres Freund) @@ -824,7 +829,8 @@ 2017-03-27 [b5635948a] Support hashed aggregation with grouping sets. --> - Improve the performance of grouping sets (Andrew Gierth) + Allow hashed aggregation to be used with grouping sets (Andrew + Gierth) @@ -901,9 +907,9 @@ 2017-03-08 [f9b1a0dd4] Expose explain's SUMMARY option --> - Allow explicit control over EXPLAIN's display of planning and - execution time (Ashutosh Bapat) + Allow explicit control + over EXPLAIN's display + of planning and execution time (Ashutosh Bapat) @@ -952,13 +958,13 @@ 2016-10-17 [7d3235ba4] By default, set log_line_prefix = '%m [%p] '. --> - Change default to include - current timestamp with milliseconds and the process id (Christoph - Berg) + Change the default value of + to include current timestamp (with milliseconds) and the process ID + in each line of postmaster log output (Christoph Berg) - The previous default was not to output a prefix. 
+ The previous default was an empty prefix. @@ -972,13 +978,11 @@ - The new functions are pg_ls_logdir() - and - pg_ls_waldir() - and can be - executed by non-super users with the proper permissions. + The new functions + are pg_ls_logdir() + and pg_ls_waldir() + and can be executed by non-superusers with the proper + permissions. @@ -1000,12 +1004,13 @@ 2017-03-14 [2b32ac2a5] Include port number when logging successful binding to a --> - Report the address and port number of successful startup socket - binding in the server logs (Tom Lane) + Report the address and port number of each listening socket + in the server log during postmaster startup (Tom Lane) - Also, report bind socket failure details in the server logs. + Also, when logging failure to bind a listening socket, include + the specific address we attempted to bind to. @@ -1030,7 +1035,7 @@ Reduce message verbosity of lower-numbered debug levels controlled by - (Robert Haas) + (Robert Haas) @@ -1075,30 +1080,18 @@ - - Show auxiliary processes and background workers in - pg_stat_activity (Kuntal Ghosh) - - - - New column backend_type identifies the process - type. - - - - - - Display walsender processes in pg_stat_activity - (Michael Paquier) + Show auxiliary processes, background workers, and walsender + processes in pg_stat_activity (Kuntal Ghosh, + Michael Paquier) - This simplifies monitoring. + This simplifies monitoring. A new + column backend_type identifies the process type. @@ -1107,7 +1100,7 @@ 2017-02-22 [4c728f382] Pass the source text for a parallel query to the workers --> - Allow pg_stat_activity to show the source query + Allow pg_stat_activity to show the SQL query being executed by parallel workers (Rafia Sabih) @@ -1120,7 +1113,7 @@ Rename pg_stat_activity.wait_event_type values LWLockTranche and - LWLockNamed to LWLock (Robert Haas) + LWLockNamed to LWLock (Robert Haas) @@ -1162,7 +1155,7 @@ 2016-09-28 [babe05bc2] Turn password_encryption GUC into an enum. 
--> - Change GUC + Change the server parameter from boolean to enum (Michael Paquier) @@ -1223,37 +1216,13 @@ This allows SSL to be reconfigured without a server - restart by using pg_ctl reload, SELECT + restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. - Reload SSL configuration updates do not work if the - SSL key requires a passphrase. - - - - - - - Remove documented restriction about using large shared buffers on - Windows (Takayuki Tsunakawa) - - - - - - - Reduce locking required to change table parameters (Simon Riggs, - Fabrízio Mello) - - - - For example, changing a table's setting can now be done - with a more lightweight lock. + However, reloading the SSL configuration does not work + if the server's SSL key requires a passphrase, as there + is no way to re-prompt for the passphrase. The original + configuration will apply for the life of the postmaster in that + case. @@ -1303,7 +1272,7 @@ 2016-12-22 [6ef2eba3f] Skip checkpoints, archiving on idle systems. --> - Prevent checkpoints and WAL archiving on + Prevent unnecessary checkpoints and WAL archiving on otherwise-idle systems (Michael Paquier) @@ -1314,7 +1283,7 @@ 2017-03-14 [bb4a39637] hash: Support WAL consistency checking. --> - Add GUC + Add server parameter to add details to WAL that can be sanity-checked on the standby (Kuntal Ghosh, Robert Haas) @@ -1365,9 +1334,10 @@ - This allows more fine-grained replication options, including - replication between different major versions of PostgreSQL and - selective-table replication. + Logical replication allows more flexibility than physical + replication does, including replication between different major + versions of PostgreSQL and selective-table + replication. 
@@ -1396,15 +1366,16 @@ 2017-05-02 [34fc61673] Change hot_standby default value to 'on' --> - Reduce configuration necessary to perform streaming backup and - replication (Magnus Hagander, Dang Minh Huong) + Reduce configuration changes necessary to perform streaming backup + and replication (Magnus Hagander, Dang Minh Huong) - Specifically, defaults were changed for , , , and . + linkend="guc-hot-standby"> to make them suitable for these usages + out-of-the-box. @@ -1446,15 +1417,15 @@ 2016-09-03 [35250b6ad] New recovery target recovery_target_lsn --> - Add specification of a Log Sequence Number (LSN) - stopping point in - recovery.conf - (Michael Paquier) + Allow specification of the recovery stopping point by Log Sequence + Number (LSN) in + recovery.conf + (Michael Paquier) - Previously only specification of the stop name, time, timeline, - xid, and immediate were supported. + Previously the stopping point could only be selected by timestamp or + XID. @@ -1535,12 +1506,18 @@ 2016-11-22 [906bfcad7] Improve handling of "UPDATE ... SET (column_list) = row_ --> - Allow ROW to supply values to UPDATE ... SET - (column_list) (Tom Lane) + Allow standard row constructor syntax in UPDATE ... SET + (column_list) = row_constructor + (Tom Lane) - Also allow row values to be supplied by table.*. + The row_constructor can now begin with the + keyword ROW; previously that had to be omitted. Also, + an occurrence of table_name.* + within the row_constructor is now expanded into + multiple columns, as in other uses + of row_constructors. @@ -1549,12 +1526,14 @@ 2016-09-05 [c54159d44] Make locale-dependent regex character classes work for l --> - Fix regular expression locale class handling for bytes greater - than U+7FF (Tom Lane) + Fix regular expressions' character class handling for large character + codes, particularly Unicode characters above U+7FF + (Tom Lane) - Previously such classes were not recognized. 
+ Previously, such characters were never recognized as belonging to + locale-dependent character classes such as [[:alpha:]]. @@ -1574,7 +1553,7 @@ Add table partitioning syntax that automatically creates partition constraints and - INSERT routing (Amit Langote) + handles routing of tuple insertions and updates (Amit Langote) @@ -1590,11 +1569,12 @@ --> Add AFTER trigger - transition table to record changed rows (Kevin Grittner) + transition tables to record changed rows (Kevin Grittner) - Transition table contents are accessible from server-side languages. + Transition tables are accessible from triggers written in + server-side languages. @@ -1614,6 +1594,26 @@ + + + + When creating a foreign-key constraint, check + for REFERENCES permission on only the referenced table + (Tom Lane) + + + + Previously REFERENCES permission on the referencing + table was also required. This appears to have stemmed from a + misreading of the SQL standard. Since creating a foreign key (or + any other type of) constraint requires ownership privilege on the + constrained table, additionally requiring REFERENCES + permission seems rather pointless. + + + - Allow COPY view FROM on views with INSTEAD + Allow COPY view + FROM source on views with INSTEAD INSERT triggers (Haribabu Kommi) - The triggers are fed the rows from COPY. + The triggers are fed the data rows read by COPY. @@ -1664,13 +1665,13 @@ --> Allow the specification of a function name without arguments in - DDL commands, when unique (Peter Eisentraut) + DDL commands, if it is unique (Peter Eisentraut) For example, allow DROP FUNCTION on a function name without arguments if there - is only one function with that name. This is required by the + is only one function with that name. This behavior is required by the SQL standard. 
@@ -1687,25 +1688,15 @@ - - Add IF NOT EXISTS for CREATE SERVER and - CREATE USER - MAPPING (Anastasia - Lubennikova) - - - - - - Add IF NOT EXISTS clause to CREATE COLLATION - (Peter Eisentraut) + Support IF NOT EXISTS + in CREATE SERVER, + CREATE USER MAPPING, + and CREATE COLLATION + (Anastasia Lubennikova, Peter Eisentraut) @@ -1715,7 +1706,7 @@ 2017-03-03 [9eb344faf] Allow vacuums to report oldestxmin --> - Have VACUUM VERBOSE report + Make VACUUM VERBOSE report the number of skipped frozen pages and oldest xmin (Masahiko Sawada, Simon Riggs) @@ -1734,11 +1725,6 @@ Improve speed of VACUUM's removal of trailing empty heap pages (Claudio Freire, Álvaro Herrera) - - - This information is also included in output. - @@ -1760,8 +1746,8 @@ - This is accessed via ts_headline() and - to_tsvector(). + The functions ts_headline() and + to_tsvector() can now be used on these data types. @@ -1770,14 +1756,15 @@ 2017-03-15 [c7a9fa399] Add support for EUI-64 MAC addresses as macaddr8 --> - Add support for EUI-64 MAC addresses as - MACADDR8 (Haribabu - Kommi) + Add support for EUI-64 MAC addresses, as a + new data type macaddr8 + (Haribabu Kommi) - This complements support for EUI-48 MAC - addresses as macaddr. + This complements the existing support + for EUI-48 MAC addresses + as macaddr. @@ -1823,8 +1810,9 @@ - Previously "anyarray" values were converted to JSON - strings. + Previously columns declared as anyarray (particularly those + in the pg_stats view) were converted to JSON + strings rather than arrays. @@ -1854,7 +1842,7 @@ 2016-09-14 [656df624c] Add overflow checks to money type input function --> - More strictly check the money type for overflow operations + Check for overflow in the money type's input function (Peter Eisentraut) @@ -1879,8 +1867,9 @@ - Similar to regexp_matches(), but only returns results - from the first match so it is easier use for simple cases. 
+ This is similar to regexp_matches(), but it only + returns results from the first match so it does not need to return a + set, making it easier to use for simple cases. @@ -1889,14 +1878,10 @@ 2017-01-18 [d00ca333c] Implement array version of jsonb_delete and operator --> - Add version of jsonb's delete operator that takes an array of keys to delete (Magnus Hagander) - - - The JSONB delete operator also now supports arrays. - @@ -1904,13 +1889,18 @@ 2017-04-06 [cf35346e8] Make json_populate_record and friends operate recursivel --> - Improve json_populate_record() - and friends operate recursively (Nikita Glukhov) + Make json_populate_record() + and related functions process JSON arrays and objects recursively + (Nikita Glukhov) - CLARIFY + With this change, array-type fields in the destination SQL type are + properly converted from JSON arrays, and composite-type fields are + properly converted from JSON objects. Previously, such cases would + fail because the text representation of the JSON value would be fed + to array_in() or record_in(), and its + syntax would not match what those input functions expect. @@ -1921,15 +1911,15 @@ Add function txid_current_ifassigned() - to return NULL if no transaction id has been assigned - (Craig Ringer) + to return the current transaction ID or NULL if no + transaction ID has been assigned (Craig Ringer) This is different from txid_current(), - which always returns a transaction id by assigning one if necessary. - This can be also run on standby servers. + which always returns a transaction ID, assigning one if necessary. + Unlike that function, this function can be run on standby servers. @@ -1944,7 +1934,7 @@ - This is useful for checking after an abrupt disconnection if + This is useful for checking after an abrupt disconnection whether your previous transaction committed and you just didn't receive the acknowledgement. 
@@ -1967,13 +1957,15 @@ 2016-09-28 [d3cd36a13] Make to_timestamp() and to_date() range-check fields of --> - Have to_timestamp() and to_date() check - input values for validity (Artur Zakirov) + Make to_timestamp() and to_date() reject + out-of-range input fields (Artur Zakirov) - Previously to_date('2009-06-40','YYYY-MM-DD') was - accepted and returned '2009-07-10'. It will now generate an error. + For example, + previously to_date('2009-06-40','YYYY-MM-DD') was + accepted and returned 2009-07-10. It will now generate + an error. @@ -1991,12 +1983,13 @@ 2017-03-27 [70ec3f1f8] PL/Python: Add cursor and execute methods to plan object --> - Allow the PL/Python plan object to call cursor and execute methods + Allow PL/Python's cursor() and execute() + functions to be called as methods of their plan-object arguments (Peter Eisentraut) - This is a more object oriented style. + This allows a more object-oriented programming style. @@ -2005,24 +1998,13 @@ 2016-12-13 [55caaaeba] Improve handling of array elements as getdiag_targets an --> - Allow PL/pgSQL's GET DIAGNOSTICS to retrieve values - into array elements (Tom Lane) - - - - - - - Remove SPI functions SPI_push(), - SPI_pop(), SPI_restore_connection() - as unnecessary (Tom Lane) + Allow PL/pgSQL's GET DIAGNOSTICS statement to retrieve + values into array elements (Tom Lane) - Their functionality now happens automatically. Also, - SPI_palloc() now requires an active connection. + Previously, a syntactic restriction prevented the target variable + from being an array element. @@ -2062,13 +2044,9 @@ 2017-03-07 [0d2b1f305] Invent start_proc parameters for PL/Tcl. --> - Add GUCs to allow initialization routines to be called - on PL/Tcl startup (Tom Lane) - - - - The GUCs are pltcl.start_proc and - pltclu.start_proc. + Add server parameters + and , to allow initialization + functions to be called on PL/Tcl startup (Tom Lane) @@ -2104,14 +2082,15 @@ 2016-11-29 [721f7bd3c] libpq: Add target_session_attrs parameter. 
--> - Allow the libpq connection string to request a read/write host + Allow libpq connection strings and URIs to request a read/write host, + that is a master server rather than a standby server (Victor Wagner, Mithun Cy) - This is useful when multiple libpq host names are - specified, and is controlled by libpq connection parameter + This is useful when multiple host names are + specified. It is controlled by libpq connection parameter @@ -2121,7 +2100,7 @@ 2017-01-24 [ba005f193] Allow password file name to be specified as a libpq conn --> - Allow password file name + Allow the password file name to be specified as a libpq connection parameter (Julian Markwort) @@ -2138,14 +2117,16 @@ Add function PQencryptPasswordConn() to allow creation of more types of encrypted passwords on the - client-side (Michael Paquier, Heikki Linnakangas) + client side (Michael Paquier, Heikki Linnakangas) - Previously only MD5 passwords could be created using MD5-encrypted passwords could be created + using PQencryptPassword(). This new function can also create SCRAM-SHA-256 passwords. + linkend="auth-pg-hba-conf">SCRAM-SHA-256-encrypted + passwords. @@ -2154,12 +2135,13 @@ 2016-08-16 [a7b5573d6] Remove separate version numbering for ecpg preprocessor. --> - ecpg preprocessor version changed from 4.12 to 10 (Tom Lane) + Change ecpg preprocessor version from 4.12 to 10 + (Tom Lane) - The ecpg version now matches the PostgreSQL distribution version - number. + Henceforth the ecpg version will match + the PostgreSQL distribution version number. @@ -2187,8 +2169,9 @@ - The new syntax uses \if, \elif, \else, and \endif. This is - particularly helpful for scripting. + This feature adds psql + meta-commands \if, \elif, \else, + and \endif. This is primarily helpful for scripting. 
@@ -2197,8 +2180,9 @@ 2017-03-07 [b2678efd4] psql: Add \gx command --> - Add psql \gx command to perform \g(execute) - in expanded mode (\x) (Christoph Berg) + Add psql \gx meta-command to execute + (\g) a query in expanded mode (\x) + (Christoph Berg) @@ -2207,13 +2191,14 @@ 2016-11-03 [a0f357e57] psql: Split up "Modifiers" column in \d and \dD --> - Improve psql's \d (display relation) + and \dD (display domain) commands to show collation, + nullable, and default properties in separate columns (Peter + Eisentraut) - Previous they were in a single "Modifiers" column. + Previously they were shown in a single Modifiers column. @@ -2222,13 +2207,13 @@ 2017-04-01 [f833c847b] Allow psql variable substitution to occur in backtick co --> - Expand psql colon variables when used in - backtick-executed contexts (Tom Lane) + Expand psql variable references in + backtick-executed strings (Tom Lane) - This is particularly useful for the new psql - conditional branch support commands. + This is particularly useful in the new psql + conditional branch commands. @@ -2239,17 +2224,23 @@ 2017-02-02 [fd6cd6980] Clean up psql's behavior for a few more control variable --> - Prevent psql special variables from being set to + Prevent psql's special variables from being set to invalid values (Daniel Vérité, Tom Lane) - Previously setting psql special variables - to invalid values produced the default behavior. \set and - \unset of special variables now sets them to "on" and its - default value, rather than a zero-length string and undefined. - Also have \set always display values for FETCH_COUNT, - HISTSIZE, and IGNOREEOF. + Previously, setting one of psql's special variables + to an invalid value silently resulted in the default behavior. + \set on a special variable now fails if the proposed + new value is invalid. 
As a special exception, \set + with an empty or omitted new value, on a boolean-valued special + variable, still has the effect of setting the variable + to on; but now it actually acquires that value rather + than an empty string. \unset on a special variable now + explicitly sets the variable to its default value, which is also + the value it acquires at startup. In sum, a control variable now + always has a displayable value that reflects + what psql is actually doing. @@ -2287,8 +2278,8 @@ 2016-11-09 [41124a91e] pgbench: Allow the transaction log file prefix to be cha --> - Add pgbench option @@ -2297,8 +2288,13 @@ 2017-01-20 [cdc2a7047] Allow backslash line continuations in pgbench's meta com --> - Allow pgbench's meta commands to span multiple lines via a - line-terminating backslash (Fabien Coelho) + Allow pgbench's meta-commands to span multiple + lines (Fabien Coelho) + + + + A meta-command can now be continued onto the next line by writing + backslash-return. @@ -2320,7 +2316,7 @@ Add pg_receivewal - option @@ -2332,7 +2328,7 @@ Add pg_recvlogical option - ( @@ -2359,7 +2355,8 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</></>, <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</></>. + <title><link linkend="APP-PGDUMP"><application>pg_dump</></>, + <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</></>, <link linkend="APP-PGRESTORE"><application>pg_restore</></> @@ -2373,7 +2370,7 @@ - This added a new @@ -2383,11 +2380,11 @@ --> Add - This suppresses the dumping of large objects. + This suppresses dumping of large objects. @@ -2397,12 +2394,13 @@ --> Add pg_dumpall option - - This allows easier dumping for less-privileged users. + This allows use of pg_dumpall by non-superusers; + without this option, it fails due to inability to read passwords. 
@@ -2411,13 +2409,15 @@ 2017-03-22 [96a7128b7] Sync pg_dump and pg_dumpall output --> - Issue fsync on the output files generated by + Issue fsync() on the output files generated by pg_dump and - pg_dumpall (Michael Paquier) + pg_dumpall (Michael Paquier) - This can be disabled with the @@ -2466,15 +2466,22 @@ - Improve fsync handling of pg_basebackup and + Be more careful about fsync'ing in all required places + in pg_basebackup and pg_receivewal (Michael Paquier) + + + - Also add @@ -2486,10 +2493,6 @@ Improve pg_basebackup's handling of which directories to skip (David Steele) - - - Also improve the documentation of skipped directories. - @@ -2598,13 +2601,19 @@ 2016-08-15 [ca9112a42] Stamp HEAD as 10devel. --> - New major version numbering (Peter Eisentraut, Tom Lane) + Change to two-part release version numbering (Peter Eisentraut, Tom + Lane) + Release numbers will now have two parts (e.g., 10.1) + rather than three (e.g., 9.6.3). Major versions will now increase just the first number, and minor - releases will increase just the second number. A third number - will no longer be used in PostgreSQL version numbers. + releases will increase just the second number. + Release branches will be referred to by single numbers + (e.g., 10 rather than 9.6). + This change is intended to reduce user confusion about what is a + major or minor release of PostgreSQL. @@ -2669,6 +2678,32 @@ + + Remove SPI functions SPI_push(), + SPI_pop(), SPI_push_conditional(), + SPI_pop_conditional(), + and SPI_restore_connection() as unnecessary (Tom Lane) + + + + Their functionality now happens automatically. There are now no-op + macros by these names so that external modules don't need to be + updated immediately, but eventually such calls should be removed. + + + + A side effect of this change is that SPI_palloc() and + allied functions now require an active SPI connection; they do not + degenerate to simple palloc() if there is none. 
That + previous behavior was not very useful and posed risks of unexpected + memory leaks. + + + + + Add more robust random number generators to be used for - cryptographic secure uses (Magnus Hagander, Michael Paquier, + cryptographically secure uses (Magnus Hagander, Michael Paquier, Heikki Linnakangas) - If no strong random number generator can be found, configure - will fail unless the configure - + + + + Remove SCO and Unixware ports (Tom Lane) + + + - Use XSLT to build the PostgreSQL documentation (Peter - Eisentraut) + Use XSLT to build the PostgreSQL + documentation (Peter Eisentraut) @@ -2827,7 +2872,7 @@ This reduces the amount of data that must be passed from the foreign data wrapper server, and offloads aggregate computation from the requesting server. The postgres_fdw is able to + linkend="postgres-fdw">postgres_fdw FDW is able to perform this optimization. There are also improvements in pushing down joins involving extensions. @@ -2932,7 +2977,7 @@ - This allows it be less disruptive when run on production systems. + This allows it to be less disruptive when run on production systems. @@ -2941,9 +2986,9 @@ 2017-02-03 [e759854a0] pgstattuple: Add pgstathashindex. --> - Add pgstathashindex() function to pgstattuple to view hash - index statistics (Ashutosh Sharma) + Add pgstattuple + function pgstathashindex() to view hash index + statistics (Ashutosh Sharma) @@ -2952,13 +2997,12 @@ 2016-09-29 [fd321a1df] Remove superuser checks in pgstattuple --> - Allow pgstattuple - to use GRANT permissions (Stephen Frost) + Use GRANT permissions to + control pgstattuple function usage (Stephen Frost) - This allows non-superusers to run these functions if permissions - allow. + This allows DBAs to allow non-superusers to run these functions. 
@@ -2967,8 +3011,7 @@ 2016-10-28 [d4b5d4cad] pgstattuple: Don't take heavyweight locks when examining --> - Reduce locking when pgstattuple examines hash + Reduce locking when pgstattuple examines hash indexes (Amit Kapila) @@ -2978,34 +3021,30 @@ 2017-03-17 [fef2bcdcb] pageinspect: Add page_checksum function --> - Add page_checksum() function to pageinspect (Tomas Vondra) + Add pageinspect + function page_checksum() to show a page's checksum + (Tomas Vondra) - Add hash index support to pageinspect (Jesper - Pedersen, Ashutosh Sharma) + Add pageinspect + function bt_page_items() to print page items from a + page image (Tomas Vondra) - Add pageinspect - function bt_page_items() to print page items from a - page image (Tomas Vondra) - - - - Previously only block numbers were supported. + Add hash index support to pageinspect (Jesper + Pedersen, Ashutosh Sharma) -- cgit v1.2.3 From 3834abe90c7359319d1909fdb69a40963276a690 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 10 Jul 2017 00:08:19 -0400 Subject: Doc: clarify wording about tool requirements in sourcerepo.sgml. Original wording had confusingly vague antecedent for "they", so replace that with a more repetitive but clearer formulation. In passing, make the link to the installation requirements section more specific. Per gripe from Martin Mai, though this is not the fix he initially proposed. Discussion: https://postgr.es/m/CAN_NWRu-cWuNaiXUjV3m4H-riWURuPW=j21bSaLADs6rjjzXgQ@mail.gmail.com --- doc/src/sgml/sourcerepo.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/sourcerepo.sgml b/doc/src/sgml/sourcerepo.sgml index f8f6bf2de1..dd9da5a7b0 100644 --- a/doc/src/sgml/sourcerepo.sgml +++ b/doc/src/sgml/sourcerepo.sgml @@ -20,9 +20,9 @@ Note that building PostgreSQL from the source repository requires reasonably up-to-date versions of bison, flex, and Perl. 
These tools are not needed - to build from a distribution tarball since the files they are used to build - are included in the tarball. Other tool requirements are the same as shown - in . + to build from a distribution tarball, because the files that these tools + are used to build are included in the tarball. Other tool requirements + are the same as shown in . -- cgit v1.2.3 From 260ba8525a6365cfb3251d619767cc6ae19ddff8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 10 Jul 2017 00:44:05 -0400 Subject: Doc: remove claim that PROVE_FLAGS defaults to '--verbose'. Commit e9c81b601 changed this, but missed updating the documentation. The adjacent claim that we use TAP tests only in src/bin seems pretty obsolete as well. Minor other copy-editing. --- doc/src/sgml/regress.sgml | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index 8229528bd4..7c2b1029c2 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -207,7 +207,7 @@ make installcheck-world - The TAP tests are only run when PostgreSQL was configured with the + The TAP-based tests are run only when PostgreSQL was configured with the option . This is recommended for development, but can be omitted if there is no suitable Perl installation. @@ -664,20 +664,19 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out TAP Tests - The client program tests under src/bin use the Perl - TAP tools and are run by prove. You can pass + Various tests, particularly the client program tests + under src/bin, use the Perl TAP tools and are run + using the Perl testing program prove. You can pass command-line options to prove by setting the make variable PROVE_FLAGS, for example: -make -C src/bin check PROVE_FLAGS='--reverse' +make -C src/bin check PROVE_FLAGS='--timer' - The default is --verbose. See the manual page - of prove for more information. + See the manual page of prove for more information. 
- The tests written in Perl require the Perl - module IPC::Run. + The TAP tests require the Perl module IPC::Run. This module is available from CPAN or an operating system package. -- cgit v1.2.3 From 7b02ba62e9ffad5b14c24756a0c2aeae839c9d05 Mon Sep 17 00:00:00 2001 From: Heikki Linnakangas Date: Mon, 10 Jul 2017 12:28:57 +0300 Subject: Allow multiple hostaddrs to go with multiple hostnames. Also fix two other issues, while we're at it: * In error message on connection failure, if multiple network addresses were given as the host option, as in "host=127.0.0.1,127.0.0.2", the error message printed the address twice. * If there were many more ports than hostnames, the error message would always claim that there was one port too many, even if there was more than one. For example, if you gave 2 hostnames and 5 ports, the error message claimed that you gave 2 hostnames and 3 ports. Discussion: https://www.postgresql.org/message-id/10badbc6-4d5a-a769-623a-f7ada43e14dd@iki.fi --- doc/src/sgml/libpq.sgml | 55 +++++++- src/interfaces/libpq/fe-connect.c | 258 ++++++++++++++++++++++++-------------- src/interfaces/libpq/libpq-int.h | 3 +- 3 files changed, 219 insertions(+), 97 deletions(-) (limited to 'doc/src') diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index f0167a64bc..124c21bed7 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -887,6 +887,42 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname host will be tried in turn until a connection is successfully established. + + + Specifying Multiple Hosts + + + It is possible to specify multiple hosts to connect to, so that they are + tried in the given order. In the Keyword/Value format, the host, + hostaddr, and port options accept a comma-separated + list of values. The same number of elements must be given in each option, such + that e.g. the first hostaddr corresponds to the first host name, + the second hostaddr corresponds to the second host name, and so + forth. 
As an exception, if only one port is specified, it + applies to all the hosts. + + + + In the connection URI format, you can list multiple host:port pairs + separated by commas, in the host component of the URI. In either + format, a single hostname can also translate to multiple network addresses. A + common example of this is a host that has both an IPv4 and an IPv6 address. + + + + When multiple hosts are specified, or when a single hostname is + translated to multiple addresses, all the hosts and addresses will be + tried in order, until one succeeds. If none of the hosts can be reached, + the connection fails. If a connection is established successfully, but + authentication fails, the remaining hosts in the list are not tried. + + + + If a password file is used, you can have different passwords for + different hosts. All the other connection options are the same for every + host; it is not possible to, for example, specify a different username for + different hosts. + @@ -900,7 +936,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname host - Comma-separated list of host names.host name + Name of host to connect to.host name If a host name begins with a slash, it specifies Unix-domain communication rather than TCP/IP communication; the value is the name of the directory in which the socket file is stored. If @@ -912,6 +948,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname when PostgreSQL was built). On machines without Unix-domain sockets, the default is to connect to localhost. + + A comma-separated list of host names is also accepted, in which case + each host name in the list is tried in order. See + for details. + @@ -965,6 +1006,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname ). + + A comma-separated list of hostaddrs is also accepted, in + which case each host in the list is tried in order. See + for details. 
+        Without either a host name or host address, libpq will connect using a
@@ -981,9 +1027,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        Port number to connect to at the server host, or
        socket file name extension for Unix-domain
        connections.port
-       If the host parameter included multiple, comma-separated
-       hosts, this parameter may specify a list of ports of equal length,
-       or it may specify a single port number to be used for all hosts.
+       If multiple hosts were given in the host or
+       hostaddr parameters, this parameter may specify a list
+       of ports of equal length, or it may specify a single port number to
+       be used for all hosts.
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 2f7b4060df..e548f3f062 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -827,6 +827,62 @@ connectOptions1(PGconn *conn, const char *conninfo)
 	return true;
 }
 
+/*
+ * Count the number of elements in a simple comma-separated list.
+ */
+static int
+count_comma_separated_elems(const char *input)
+{
+	int			n;
+
+	n = 1;
+	for (; *input != '\0'; input++)
+	{
+		if (*input == ',')
+			n++;
+	}
+
+	return n;
+}
+
+/*
+ * Parse a simple comma-separated list.
+ *
+ * On each call, returns a malloc'd copy of the next element, and sets *more
+ * to indicate whether there are any more elements in the list after this,
+ * and updates *startptr to point to the next element, if any.
+ *
+ * On out of memory, returns NULL.
+ */
+static char *
+parse_comma_separated_list(char **startptr, bool *more)
+{
+	char	   *p;
+	char	   *s = *startptr;
+	char	   *e;
+	int			len;
+
+	/*
+	 * Search for the end of the current element; a comma or end-of-string
+	 * acts as a terminator.
+	 */
+	e = s;
+	while (*e != '\0' && *e != ',')
+		++e;
+	*more = (*e == ',');
+
+	len = e - s;
+	p = (char *) malloc(sizeof(char) * (len + 1));
+	if (p)
+	{
+		memcpy(p, s, len);
+		p[len] = '\0';
+	}
+	*startptr = e + 1;
+
+	return p;
+}
+
 /*
  * connectOptions2
  *
@@ -840,21 +896,16 @@ connectOptions2(PGconn *conn)
 {
 	/*
 	 * Allocate memory for details about each host to which we might possibly
-	 * try to connect. If pghostaddr is set, we're only going to try to
-	 * connect to that one particular address. If it's not, we'll use pghost,
-	 * which may contain multiple, comma-separated names.
+	 * try to connect. For that, count the number of elements in the hostaddr
+	 * or host options. If neither is given, assume one host.
 	 */
-	conn->nconnhost = 1;
 	conn->whichhost = 0;
-	if ((conn->pghostaddr == NULL || conn->pghostaddr[0] == '\0')
-		&& conn->pghost != NULL)
-	{
-		char	   *s;
-
-		for (s = conn->pghost; *s != '\0'; ++s)
-			if (*s == ',')
-				conn->nconnhost++;
-	}
+	if (conn->pghostaddr && conn->pghostaddr[0] != '\0')
+		conn->nconnhost = count_comma_separated_elems(conn->pghostaddr);
+	else if (conn->pghost && conn->pghost[0] != '\0')
+		conn->nconnhost = count_comma_separated_elems(conn->pghost);
+	else
+		conn->nconnhost = 1;
 	conn->connhost = (pg_conn_host *) calloc(conn->nconnhost, sizeof(pg_conn_host));
 	if (conn->connhost == NULL)
@@ -866,51 +917,67 @@ connectOptions2(PGconn *conn)
 	 */
 	if (conn->pghostaddr != NULL && conn->pghostaddr[0] != '\0')
 	{
-		conn->connhost[0].host = strdup(conn->pghostaddr);
-		if (conn->connhost[0].host == NULL)
-			goto oom_error;
-		conn->connhost[0].type = CHT_HOST_ADDRESS;
+		int			i;
+		char	   *s = conn->pghostaddr;
+		bool		more = true;
+
+		for (i = 0; i < conn->nconnhost && more; i++)
+		{
+			conn->connhost[i].hostaddr = parse_comma_separated_list(&s, &more);
+			if (conn->connhost[i].hostaddr == NULL)
+				goto oom_error;
+
+			conn->connhost[i].type = CHT_HOST_ADDRESS;
+		}
+
+		/*
+		 * If hostaddr was given, the array was allocated according to the
+		 * number of elements in the hostaddr list, so it really should be
+		 * the right size.
+		 */
+		Assert(!more);
+		Assert(i == conn->nconnhost);
 	}
-	else if (conn->pghost != NULL && conn->pghost[0] != '\0')
+
+	if (conn->pghost != NULL && conn->pghost[0] != '\0')
 	{
-		int			i = 0;
+		int			i;
 		char	   *s = conn->pghost;
+		bool		more = true;
 
-		while (1)
+		for (i = 0; i < conn->nconnhost && more; i++)
 		{
-			char	   *e = s;
-
-			/*
-			 * Search for the end of the current hostname; a comma or
-			 * end-of-string acts as a terminator.
-			 */
-			while (*e != '\0' && *e != ',')
-				++e;
-
-			/* Copy the hostname whose bounds we just identified. */
-			conn->connhost[i].host =
-				(char *) malloc(sizeof(char) * (e - s + 1));
+			conn->connhost[i].host = parse_comma_separated_list(&s, &more);
 			if (conn->connhost[i].host == NULL)
 				goto oom_error;
-			memcpy(conn->connhost[i].host, s, e - s);
-			conn->connhost[i].host[e - s] = '\0';
 
 			/* Identify the type of host. */
-			conn->connhost[i].type = CHT_HOST_NAME;
+			if (conn->pghostaddr == NULL || conn->pghostaddr[0] == '\0')
+			{
+				conn->connhost[i].type = CHT_HOST_NAME;
 #ifdef HAVE_UNIX_SOCKETS
-			if (is_absolute_path(conn->connhost[i].host))
-				conn->connhost[i].type = CHT_UNIX_SOCKET;
+				if (is_absolute_path(conn->connhost[i].host))
+					conn->connhost[i].type = CHT_UNIX_SOCKET;
 #endif
-
-			/* Prepare to find the next host (if any). */
-			if (*e == '\0')
-				break;
-			s = e + 1;
-			i++;
+			}
+		}
+		if (more || i != conn->nconnhost)
+		{
+			conn->status = CONNECTION_BAD;
+			printfPQExpBuffer(&conn->errorMessage,
+							  libpq_gettext("could not match %d host names to %d hostaddrs\n"),
+							  count_comma_separated_elems(conn->pghost), conn->nconnhost);
+			return false;
 		}
 	}
-	else
+
+	/*
+	 * If neither host or hostaddr options was given, connect to default host.
+	 */
+	if ((conn->pghostaddr == NULL || conn->pghostaddr[0] == '\0') &&
+		(conn->pghost == NULL || conn->pghost[0] == '\0'))
 	{
+		Assert(conn->nconnhost == 1);
 #ifdef HAVE_UNIX_SOCKETS
 		conn->connhost[0].host = strdup(DEFAULT_PGSOCKET_DIR);
 		conn->connhost[0].type = CHT_UNIX_SOCKET;
@@ -927,54 +994,36 @@ connectOptions2(PGconn *conn)
 	 */
 	if (conn->pgport != NULL && conn->pgport[0] != '\0')
 	{
-		int			i = 0;
+		int			i;
 		char	   *s = conn->pgport;
-		int			nports = 1;
+		bool		more = true;
 
-		for (i = 0; i < conn->nconnhost; ++i)
+		for (i = 0; i < conn->nconnhost && more; i++)
 		{
-			char	   *e = s;
-
-			/* Search for the end of the current port number. */
-			while (*e != '\0' && *e != ',')
-				++e;
+			conn->connhost[i].port = parse_comma_separated_list(&s, &more);
+			if (conn->connhost[i].port == NULL)
+				goto oom_error;
+		}
 
-			/*
-			 * If we found a port number of non-zero length, copy it.
-			 * Otherwise, insert the default port number.
-			 */
-			if (e > s)
+		/*
+		 * If exactly one port was given, use it for every host. Otherwise,
+		 * there must be exactly as many ports as there were hosts.
+		 */
+		if (i == 1 && !more)
+		{
+			for (i = 1; i < conn->nconnhost; i++)
 			{
-				conn->connhost[i].port =
-					(char *) malloc(sizeof(char) * (e - s + 1));
+				conn->connhost[i].port = strdup(conn->connhost[0].port);
 				if (conn->connhost[i].port == NULL)
 					goto oom_error;
-				memcpy(conn->connhost[i].port, s, e - s);
-				conn->connhost[i].port[e - s] = '\0';
-			}
-
-			/*
-			 * Move on to the next port number, unless there are no more. (If
-			 * only one part number is specified, we reuse it for every host.)
-			 */
-			if (*e != '\0')
-			{
-				s = e + 1;
-				++nports;
 			}
 		}
-
-		/*
-		 * If multiple ports were specified, there must be exactly as many
-		 * ports as there were hosts. Otherwise, we do not know how to match
-		 * them up.
-		 */
-		if (nports != 1 && nports != conn->nconnhost)
+		else if (more || i != conn->nconnhost)
 		{
 			conn->status = CONNECTION_BAD;
 			printfPQExpBuffer(&conn->errorMessage,
 							  libpq_gettext("could not match %d port numbers to %d hosts\n"),
-							  nports, conn->nconnhost);
+							  count_comma_separated_elems(conn->pgport), conn->nconnhost);
 			return false;
 		}
 	}
@@ -1048,8 +1097,8 @@ connectOptions2(PGconn *conn)
 			char	   *pwhost = conn->connhost[i].host;
 
 			if (conn->connhost[i].type == CHT_HOST_ADDRESS &&
-				conn->pghost != NULL && conn->pghost[0] != '\0')
-				pwhost = conn->pghost;
+				conn->connhost[i].host != NULL && conn->connhost[i].host != '\0')
+				pwhost = conn->connhost[i].hostaddr;
 
 			conn->connhost[i].password =
 				passwordFromFile(pwhost,
@@ -1399,8 +1448,8 @@ connectFailureMessage(PGconn *conn, int errorno)
 	 * Optionally display the network address with the hostname. This is
 	 * useful to distinguish between IPv4 and IPv6 connections.
 	 */
-	if (conn->pghostaddr != NULL)
-		strlcpy(host_addr, conn->pghostaddr, NI_MAXHOST);
+	if (conn->connhost[conn->whichhost].type == CHT_HOST_ADDRESS)
+		strlcpy(host_addr, conn->connhost[conn->whichhost].hostaddr, NI_MAXHOST);
 	else if (addr->ss_family == AF_INET)
 	{
 		if (inet_net_ntop(AF_INET,
@@ -1423,7 +1472,10 @@ connectFailureMessage(PGconn *conn, int errorno)
 		strcpy(host_addr, "???");
 
 	/* To which host and port were we actually connecting? */
-	displayed_host = conn->connhost[conn->whichhost].host;
+	if (conn->connhost[conn->whichhost].type == CHT_HOST_ADDRESS)
+		displayed_host = conn->connhost[conn->whichhost].hostaddr;
+	else
+		displayed_host = conn->connhost[conn->whichhost].host;
 	displayed_port = conn->connhost[conn->whichhost].port;
 	if (displayed_port == NULL || displayed_port[0] == '\0')
 		displayed_port = DEF_PGPORT_STR;
@@ -1433,8 +1485,8 @@ connectFailureMessage(PGconn *conn, int errorno)
 	 * 'host' was missing or does not match our lookup, display the
 	 * looked-up IP address.
 	 */
-	if ((conn->pghostaddr == NULL) &&
-		(conn->pghost == NULL || strcmp(conn->pghost, host_addr) != 0))
+	if (conn->connhost[conn->whichhost].type != CHT_HOST_ADDRESS &&
+		strcmp(displayed_host, host_addr) != 0)
 		appendPQExpBuffer(&conn->errorMessage,
 						  libpq_gettext("could not connect to server: %s\n"
 										"\tIs the server running on host \"%s\" (%s) and accepting\n"
@@ -1659,7 +1711,7 @@ connectDBStart(PGconn *conn)
 		hint.ai_family = AF_UNSPEC;
 
 		/* Figure out the port number we're going to use. */
-		if (ch->port == NULL)
+		if (ch->port == NULL || ch->port[0] == '\0')
 			thisport = DEF_PGPORT;
 		else
 		{
@@ -1689,7 +1741,7 @@ connectDBStart(PGconn *conn)
 			case CHT_HOST_ADDRESS:
 				hint.ai_flags = AI_NUMERICHOST;
-				ret = pg_getaddrinfo_all(ch->host, portstr, &hint, &ch->addrlist);
+				ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint, &ch->addrlist);
 				if (ret || !ch->addrlist)
 					appendPQExpBuffer(&conn->errorMessage,
 									  libpq_gettext("could not parse network address \"%s\": %s\n"),
@@ -3041,6 +3093,9 @@ keep_going:						/* We will come back to here until there is
 			}
 
 		case CONNECTION_CHECK_WRITABLE:
 			{
+				const char *displayed_host;
+				const char *displayed_port;
+
 				if (!saveErrorMessage(conn, &savedMessage))
 					goto error_return;
@@ -3067,6 +3122,17 @@ keep_going:						/* We will come back to here until there is
 					val = PQgetvalue(res, 0, 0);
 					if (strncmp(val, "on", 2) == 0)
 					{
+						const char *displayed_host;
+						const char *displayed_port;
+
+						if (conn->connhost[conn->whichhost].type == CHT_HOST_ADDRESS)
+							displayed_host = conn->connhost[conn->whichhost].hostaddr;
+						else
+							displayed_host = conn->connhost[conn->whichhost].host;
+						displayed_port = conn->connhost[conn->whichhost].port;
+						if (displayed_port == NULL || displayed_port[0] == '\0')
+							displayed_port = DEF_PGPORT_STR;
+
 						PQclear(res);
 						restoreErrorMessage(conn, &savedMessage);
@@ -3075,8 +3141,7 @@ keep_going:						/* We will come back to here until there is
 										  libpq_gettext("could not make a writable "
														"connection to server "
														"\"%s:%s\"\n"),
-										  conn->connhost[conn->whichhost].host,
-										  conn->connhost[conn->whichhost].port);
+										  displayed_host, displayed_port);
 						conn->status = CONNECTION_OK;
 						sendTerminateConn(conn);
 						pqDropConnection(conn, true);
@@ -3113,11 +3178,18 @@ keep_going:						/* We will come back to here until there is
 				if (res)
 					PQclear(res);
 				restoreErrorMessage(conn, &savedMessage);
+
+				if (conn->connhost[conn->whichhost].type == CHT_HOST_ADDRESS)
+					displayed_host = conn->connhost[conn->whichhost].hostaddr;
+				else
+					displayed_host = conn->connhost[conn->whichhost].host;
+				displayed_port = conn->connhost[conn->whichhost].port;
+				if (displayed_port == NULL || displayed_port[0] == '\0')
+					displayed_port = DEF_PGPORT_STR;
 				appendPQExpBuffer(&conn->errorMessage,
 								  libpq_gettext("test \"SHOW transaction_read_only\" failed "
 												"on server \"%s:%s\"\n"),
-								  conn->connhost[conn->whichhost].host,
-								  conn->connhost[conn->whichhost].port);
+								  displayed_host, displayed_port);
 				conn->status = CONNECTION_OK;
 				sendTerminateConn(conn);
 				pqDropConnection(conn, true);
@@ -3350,6 +3422,8 @@ freePGconn(PGconn *conn)
 		{
 			if (conn->connhost[i].host != NULL)
 				free(conn->connhost[i].host);
+			if (conn->connhost[i].hostaddr != NULL)
+				free(conn->connhost[i].hostaddr);
 			if (conn->connhost[i].port != NULL)
 				free(conn->connhost[i].port);
 			if (conn->connhost[i].password != NULL)
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index ff5020fc0c..42913604e3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -304,8 +304,9 @@ typedef enum pg_conn_host_type
  */
 typedef struct pg_conn_host
 {
-	char	   *host;			/* host name or address, or socket path */
 	pg_conn_host_type type;		/* type of host */
+	char	   *host;			/* host name or socket path */
+	char	   *hostaddr;		/* host address */
 	char	   *port;			/* port number for this host; if not NULL,
 								 * overrides the PGConn's pgport */
 	char	   *password;		/* password for this host, read from the
--
cgit v1.2.3


From d137a6dc239bd32b424826acbb25277ac611ddb1 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas
Date: Mon, 10 Jul 2017 15:36:02 +0300
Subject: Fix missing tag in the docs.

Masahiko Sawada

Discussion: https://www.postgresql.org/message-id/CAD21AoBCwcTNMdrVWq8T0hoOs2mWSYq9PRJ_fr6SH8HdO+m=0g@mail.gmail.com
---
 doc/src/sgml/libpq.sgml | 1 +
 1 file changed, 1 insertion(+)

(limited to 'doc/src')

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 124c21bed7..98b69380b6 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -923,6 +923,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        host, it is not possible to e.g. specify a different username for
        different hosts.
+
--
cgit v1.2.3