&slony1; Trigger Handling
In &postgres; version 8.3, new functionality was added whereby
the behaviour of triggers and rules may be altered via ALTER
TABLE, using the following forms:
DISABLE TRIGGER trigger_name
ENABLE TRIGGER trigger_name
ENABLE REPLICA TRIGGER trigger_name
ENABLE ALWAYS TRIGGER trigger_name
DISABLE RULE rewrite_rule_name
ENABLE RULE rewrite_rule_name
ENABLE REPLICA RULE rewrite_rule_name
ENABLE ALWAYS RULE rewrite_rule_name
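For illustration, each of these forms is applied to a particular table
via ALTER TABLE; the table, trigger, and rule names below are purely
hypothetical:
-- hypothetical table, trigger, and rule names, for illustration only
alter table accounts disable trigger accounts_audit;
alter table accounts enable trigger accounts_audit;
alter table accounts enable replica trigger accounts_audit;
alter table accounts enable always trigger accounts_audit;
alter table accounts disable rule accounts_archive;
alter table accounts enable always rule accounts_archive;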
A new GUC variable, session_replication_role,
controls whether the session is in origin,
replica, or local mode; this, in
combination with the above enabling/disabling options, determines
whether or not a given trigger function actually runs.
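For instance, a superuser session can be switched between these modes
as follows; this is, roughly, what the slon process does while applying
replicated data:
-- setting the role requires superuser privileges
set session_replication_role to replica;
show session_replication_role;
set session_replication_role to origin;   -- back to the default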
We may characterize when triggers fire, under &slony1;
replication, based on the following table; the same rules apply to
&postgres; rules.
Trigger Behaviour

Trigger Form             When Established   Log Trigger              denyaccess Trigger       Action - origin   Action - replica   Action - local
DISABLE TRIGGER          User request       disabled on subscriber   enabled on subscriber    does not fire     does not fire      does not fire
ENABLE TRIGGER           Default            enabled on subscriber    disabled on subscriber   fires             does not fire      fires
ENABLE REPLICA TRIGGER   User request       inappropriate            inappropriate            does not fire    fires              does not fire
ENABLE ALWAYS TRIGGER    User request       inappropriate            inappropriate            fires             fires              fires
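To check which form each trigger on a given table is currently in, one
may inspect pg_trigger.tgenabled, which in &postgres; 8.3 and later
holds a one-character code; the table name below is hypothetical:
-- 'O' = enabled (fires in origin/local), 'D' = disabled,
-- 'R' = replica only, 'A' = always
select tgname, tgenabled
  from pg_trigger
 where tgrelid = 'my_schema.my_table'::regclass;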
There are, correspondingly, several ways in which &slony1;
interacts with this. Let us outline the cases that are of interest:
Before replication is set up, every database starts out in
origin status, and, by default, all triggers are of the
ENABLE TRIGGER form, so they all run, as is normal
in a system uninvolved in replication.
When a &slony1; subscription is set up, on the origin
node, both the logtrigger and
denyaccess triggers are added; the former is
enabled, and runs, while the latter is disabled, and does not.
From a locking perspective, each request will need to briefly take out an
exclusive lock on each table as it attaches these triggers, which is
much the same as has always been the case with &slony1;.
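The triggers attached on the origin look roughly like the following
sketch; the table ID and column-kind argument are made-up values, and
the exact function signatures vary between &slony1; versions:
create trigger "_@CLUSTERNAME@_logtrigger"
    after insert or update or delete on my_table
    for each row
    execute procedure @NAMESPACE@.logtrigger('_@CLUSTERNAME@', '22', 'kvvvv');

create trigger "_@CLUSTERNAME@_denyaccess"
    before insert or update or delete on my_table
    for each row
    execute procedure @NAMESPACE@.denyaccess('_@CLUSTERNAME@');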
On the subscriber, the subscription process will add
the same triggers, but with the polarities reversed, to
protect data from accidental corruption on subscribers.
From a locking perspective, again, there is not much difference
from earlier &slony1; behaviour, as the subscription process, due to
running TRUNCATE, copying data, and altering table
schemas, requires extensive exclusive table
locks, and the changes in trigger behaviour do not change those
requirements.
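Conceptually, the reversed polarities on the subscriber amount to
something like the following sketch, reusing the placeholder names from
above; &slony1; arranges the equivalent itself during subscription:
-- conceptual sketch only: on a subscriber, the log trigger is disabled
-- and the deny-access trigger is enabled (the reverse of the origin)
alter table my_table disable trigger "_@CLUSTERNAME@_logtrigger";
alter table my_table enable trigger "_@CLUSTERNAME@_denyaccess";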
If you restore a backup of a Slony-I node (taken by pg_dump or
any other method) and drop the Slony-I namespace, this now cleanly removes
all Slony-I components, leaving the database, including its schema,
in a pristine, consistent state, ready for whatever use may
be desired.
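For example, after restoring such a dump, removing the cluster's
namespace is a single statement; "_@CLUSTERNAME@" below stands in for
your cluster's actual namespace:
-- remove all Slony-I components from the restored copy of the database
drop schema "_@CLUSTERNAME@" cascade;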
&rddlchanges; is now performed in quite a different
way: rather than altering each replicated table to take it out
of replicated mode, &slony1; instead simply shifts into the
local status for the duration of this event.
On the origin, this deactivates the
logtrigger trigger.
On each subscriber, this deactivates the
denyaccess trigger.
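The effect is roughly as though the DDL were wrapped in a session-role
switch such as the following sketch; in practice, &slony1;'s DDL
facility manages this itself, and the table and DDL shown are
placeholders:
-- sketch of the effect: replication triggers are quiesced while the
-- DDL runs, because the session is in "local" role
set session_replication_role to local;
alter table my_table add column note text;   -- the replicated DDL
set session_replication_role to origin;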
At the time of invoking MOVE SET
against the former origin, &slony1; must transform that node into a
subscriber, which requires dropping the lockset
triggers, disabling the logtrigger triggers, and
enabling the denyaccess triggers.
At about the same time, when processing MOVE SET against the new origin, &slony1; must transform
that node into an origin, which requires disabling the formerly active
denyaccess triggers, and enabling the
logtrigger triggers.
From a locking perspective, &slony1; will need to take out exclusive
locks to disable and enable the respective
triggers.
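Since enabling and disabling triggers are forms of ALTER TABLE, a
sketch of what happens on the former origin (with placeholder names)
looks like the following, each statement holding an exclusive lock on
its table until the transaction commits:
begin;
-- former origin becomes a subscriber: stop logging, start denying writes
alter table my_table disable trigger "_@CLUSTERNAME@_logtrigger";
alter table my_table enable trigger "_@CLUSTERNAME@_denyaccess";
commit;
-- on the new origin, the reverse statements are issued, each likewise
-- requiring an exclusive lock on the table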
Similarly to MOVE SET, FAILOVER transforms a subscriber node into an origin,
which requires disabling the formerly active
denyaccess triggers, and enabling the
logtrigger triggers. The locking implications
are, again, much the same, requiring an exclusive lock on each such
table.
TRUNCATE in &postgres; 8.4+
In &postgres; 8.4, triggers were augmented to support the
TRUNCATE event. Thus, one may create a trigger
which runs when one requests TRUNCATE on a table, as
follows:
create trigger "_@CLUSTERNAME@_truncatetrigger"
before truncate on my_table
for each statement
execute procedure @NAMESPACE@.log_truncate(22);
&slony1; supports this on nodes running &postgres; 8.4 and
above, as follows:
Tables have an additional two triggers attached to them:

log_truncate(tab_id)
    Running on the origin, this captures
    TRUNCATE requests, and stores them in &sllog1; and
    &sllog2; so that they are applied at the appropriate point on
    subscriber nodes.

truncate_deny()
    Running on subscriber nodes, this forbids running
    TRUNCATE directly against replicated tables on
    these nodes, in much the same way denyAccess()
    forbids running INSERT/UPDATE/DELETE directly
    against replicated tables.
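Following the same pattern as the log_truncate trigger shown earlier,
the deny trigger on a subscriber might be declared roughly as follows;
the trigger name here is illustrative:
create trigger "_@CLUSTERNAME@_truncatedeny"
    before truncate on my_table
    for each statement
    execute procedure @NAMESPACE@.truncate_deny();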
For each table, the command TRUNCATE TABLE
ONLY my_schema.my_table CASCADE; is submitted.
Various options were considered (see Bugzilla Bug
#134), after which CASCADE was concluded
to be the appropriate answer.
If you have a subscriber node where additional tables
have been attached via foreign keys to a replicated table, then
running TRUNCATE against that parent table will
also TRUNCATE all the
children.
Of course, it should be observed that this was a
terribly dangerous thing to have done in the first place, because
deleting data from the parent table would already either:
Lead to data being deleted from the child tables, meaning that the addition of TRUNCATE support is really no change at all;
Lead to foreign keys being broken on the subscriber, causing replication to keel over.
(In effect, we are not really worsening things.)
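For example, suppose a subscriber carries a local table attached by
foreign key to a replicated table; the names here are hypothetical:
-- hypothetical: a subscriber-local table attached to a replicated table
create table local_report (
    id     integer primary key,
    ref_id integer references my_schema.my_table
);
-- when the subscriber replays
--   truncate table only my_schema.my_table cascade;
-- local_report is emptied as well, because of the cascade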
Note that if a request truncates several tables
(e.g. where a table has a hierarchy of
children), then a request will be logged in &sllog1;/&sllog2; for each
table, and the TRUNCATE CASCADE will
effectively mean that the child tables will be
truncated, first indirectly, then directly. If there is a hierarchy
of 3 tables, t1, t2, and
t3, then t3 will get truncated three
times. It is empty after the first TRUNCATE, so
the additional iterations will be cheap.
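A small sketch makes the repeated truncation concrete, using the
three-level hierarchy just described; the table definitions are
illustrative:
-- a three-level hierarchy: t3 references t2, which references t1
create table t1 (id integer primary key);
create table t2 (id integer primary key,
                 t1_id integer references t1);
create table t3 (id integer primary key,
                 t2_id integer references t2);

-- replaying the logged requests truncates t3 three times in all:
truncate table only t1 cascade;   -- cascades to t2 and t3
truncate table only t2 cascade;   -- cascades to t3 (already empty)
truncate table only t3 cascade;   -- direct, and cheap, since t3 is empty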
If mixing &postgres; 8.3 and higher versions within a
cluster:
&postgres; 8.3 nodes will not capture
TRUNCATE requests, either to log the need to
propagate the TRUNCATE or to prevent it, on either origin or
replica.
&postgres; 8.4 nodes do capture
TRUNCATE requests for both
purposes.
If a &postgres; 8.4+ node captures a
TRUNCATE request, it will apply fine against a
subscriber running &postgres; 8.3.