1/*-------------------------------------------------------------------------
2 *
3 * inval.c
4 * POSTGRES cache invalidation dispatcher code.
5 *
6 * This is subtle stuff, so pay attention:
7 *
8 * When a tuple is updated or deleted, our standard visibility rules
9 * consider that it is *still valid* so long as we are in the same command,
10 * ie, until the next CommandCounterIncrement() or transaction commit.
11 * (See access/heap/heapam_visibility.c, and note that system catalogs are
12 * generally scanned under the most current snapshot available, rather than
13 * the transaction snapshot.) At the command boundary, the old tuple stops
14 * being valid and the new version, if any, becomes valid. Therefore,
15 * we cannot simply flush a tuple from the system caches during heap_update()
16 * or heap_delete(). The tuple is still good at that point; what's more,
17 * even if we did flush it, it might be reloaded into the caches by a later
18 * request in the same command. So the correct behavior is to keep a list
19 * of outdated (updated/deleted) tuples and then do the required cache
20 * flushes at the next command boundary. We must also keep track of
21 * inserted tuples so that we can flush "negative" cache entries that match
22 * the new tuples; again, that mustn't happen until end of command.
23 *
24 * Once we have finished the command, we still need to remember inserted
25 * tuples (including new versions of updated tuples), so that we can flush
26 * them from the caches if we abort the transaction. Similarly, we'd better
27 * be able to flush "negative" cache entries that may have been loaded in
28 * place of deleted tuples, so we still need the deleted ones too.
29 *
30 * If we successfully complete the transaction, we have to broadcast all
31 * these invalidation events to other backends (via the SI message queue)
32 * so that they can flush obsolete entries from their caches. Note we have
33 * to record the transaction commit before sending SI messages, otherwise
34 * the other backends won't see our updated tuples as good.
35 *
36 * When a subtransaction aborts, we can process and discard any events
37 * it has queued. When a subtransaction commits, we just add its events
38 * to the pending lists of the parent transaction.
39 *
40 * In short, we need to remember until xact end every insert or delete
41 * of a tuple that might be in the system caches. Updates are treated as
42 * two events, delete + insert, for simplicity. (If the update doesn't
43 * change the tuple hash value, catcache.c optimizes this into one event.)
44 *
45 * We do not need to register EVERY tuple operation in this way, just those
46 * on tuples in relations that have associated catcaches. We do, however,
47 * have to register every operation on every tuple that *could* be in a
48 * catcache, whether or not it currently is in our cache. Also, if the
49 * tuple is in a relation that has multiple catcaches, we need to register
50 * an invalidation message for each such catcache. catcache.c's
51 * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
52 * catcaches may need invalidation for a given tuple.
53 *
54 * Also, whenever we see an operation on a pg_class, pg_attribute, or
55 * pg_index tuple, we register a relcache flush operation for the relation
56 * described by that tuple (as specified in CacheInvalidateHeapTuple()).
57 * Likewise for pg_constraint tuples for foreign keys on relations.
58 *
59 * We keep the relcache flush requests in lists separate from the catcache
60 * tuple flush requests. This allows us to issue all the pending catcache
61 * flushes before we issue relcache flushes, which saves us from loading
62 * a catcache tuple during relcache load only to flush it again right away.
63 * Also, we avoid queuing multiple relcache flush requests for the same
64 * relation, since a relcache flush is relatively expensive to do.
65 * (XXX is it worth testing likewise for duplicate catcache flush entries?
66 * Probably not.)
67 *
68 * Many subsystems own higher-level caches that depend on relcache and/or
69 * catcache, and they register callbacks here to invalidate their caches.
70 * While building a higher-level cache entry, a backend may receive a
71 * callback for the being-built entry or one of its dependencies. This
72 * implies the new higher-level entry would be born stale, and it might
 73 * remain stale for the life of the backend. Many caches do not prevent
 74 * that. They rely on the fact that DDL making can't-miss catalog changes
 75 * takes AccessExclusiveLock on suitable objects. (For a change made with less
76 * locking, backends might never read the change.) The relation cache,
77 * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
78 * than the beginning of the next transaction. Hence, when a relevant
79 * invalidation callback arrives during a build, relcache.c reattempts that
80 * build. Caches with similar needs could do likewise.
81 *
82 * If a relcache flush is issued for a system relation that we preload
83 * from the relcache init file, we must also delete the init file so that
84 * it will be rebuilt during the next backend restart. The actual work of
85 * manipulating the init file is in relcache.c, but we keep track of the
86 * need for it here.
87 *
88 * Currently, inval messages are sent without regard for the possibility
89 * that the object described by the catalog tuple might be a session-local
90 * object such as a temporary table. This is because (1) this code has
91 * no practical way to tell the difference, and (2) it is not certain that
92 * other backends don't have catalog cache or even relcache entries for
93 * such tables, anyway; there is nothing that prevents that. It might be
94 * worth trying to avoid sending such inval traffic in the future, if those
95 * problems can be overcome cheaply.
96 *
97 * When making a nontransactional change to a cacheable object, we must
98 * likewise send the invalidation immediately, before ending the change's
99 * critical section. This includes inplace heap updates, relmap, and smgr.
100 *
 101 * When wal_level=logical, we write invalidations into WAL at each command
 102 * end to support the decoding of in-progress transactions. See
103 * CommandEndInvalidationMessages.
104 *
105 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
106 * Portions Copyright (c) 1994, Regents of the University of California
107 *
108 * IDENTIFICATION
109 * src/backend/utils/cache/inval.c
110 *
111 *-------------------------------------------------------------------------
112 */
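/*
 * To make the above concrete, a rough sketch (for illustration only; the
 * real call sites live in heapam.c and xact.c) of the flow for one
 * transactional catalog update:
 *
 *		heap_update() on a system catalog
 *			-> CacheInvalidateHeapTuple()          queue the inval events
 *		CommandCounterIncrement()
 *			-> CommandEndInvalidationMessages()    flush our own caches
 *		transaction commit
 *			-> AtEOXact_Inval(true)                broadcast via the SI queue
 */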
113#include "postgres.h"
114
115#include <limits.h>
116
117#include "access/htup_details.h"
118#include "access/xact.h"
119#include "access/xloginsert.h"
 120#include "catalog/catalog.h"
 121#include "catalog/pg_constraint.h"
 122#include "miscadmin.h"
123#include "storage/procnumber.h"
124#include "storage/sinval.h"
125#include "storage/smgr.h"
 126#include "utils/catcache.h"
 127#include "utils/injection_point.h"
 128#include "utils/inval.h"
129#include "utils/memdebug.h"
130#include "utils/memutils.h"
131#include "utils/rel.h"
132#include "utils/relmapper.h"
133#include "utils/snapmgr.h"
134#include "utils/syscache.h"
135
136
137/*
138 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
139 * We keep the messages themselves in arrays in TopTransactionContext (there
140 * are separate arrays for catcache and relcache messages). For transactional
141 * messages, control information is kept in a chain of TransInvalidationInfo
142 * structs, also allocated in TopTransactionContext. (We could keep a
143 * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
 144 * that's more wasteful, not less so, since in very many scenarios it'd be the
145 * only allocation in the subtransaction's CurTransactionContext.) For
146 * inplace update messages, control information appears in an
147 * InvalidationInfo, allocated in CurrentMemoryContext.
148 *
149 * We can store the message arrays densely, and yet avoid moving data around
150 * within an array, because within any one subtransaction we need only
151 * distinguish between messages emitted by prior commands and those emitted
152 * by the current command. Once a command completes and we've done local
153 * processing on its messages, we can fold those into the prior-commands
154 * messages just by changing array indexes in the TransInvalidationInfo
 155 * struct. Similarly, we need to distinguish messages of prior subtransactions
156 * from those of the current subtransaction only until the subtransaction
157 * completes, after which we adjust the array indexes in the parent's
158 * TransInvalidationInfo to include the subtransaction's messages. Inplace
159 * invalidations don't need a concept of command or subtransaction boundaries,
160 * since we send them during the WAL insertion critical section.
161 *
162 * The ordering of the individual messages within a command's or
163 * subtransaction's output is not considered significant, although this
164 * implementation happens to preserve the order in which they were queued.
165 * (Previous versions of this code did not preserve it.)
166 *
167 * For notational convenience, control information is kept in two-element
168 * arrays, the first for catcache messages and the second for relcache
169 * messages.
170 */
171#define CatCacheMsgs 0
172#define RelCacheMsgs 1
173
174/* Pointers to main arrays in TopTransactionContext */
175typedef struct InvalMessageArray
176{
177 SharedInvalidationMessage *msgs; /* palloc'd array (can be expanded) */
178 int maxmsgs; /* current allocated size of array */
 179} InvalMessageArray;
 180
 181static InvalMessageArray InvalMessageArrays[2];
 182
 183/* Control information for one logical group of messages */
 184typedef struct InvalidationMsgsGroup
 185{
 186 int firstmsg[2]; /* first index in relevant array */
 187 int nextmsg[2]; /* last+1 index */
 188} InvalidationMsgsGroup;
 189
190/* Macros to help preserve InvalidationMsgsGroup abstraction */
191#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
192 do { \
193 (targetgroup)->firstmsg[subgroup] = \
194 (targetgroup)->nextmsg[subgroup] = \
195 (priorgroup)->nextmsg[subgroup]; \
196 } while (0)
197
198#define SetGroupToFollow(targetgroup, priorgroup) \
199 do { \
200 SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
201 SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
202 } while (0)
203
204#define NumMessagesInSubGroup(group, subgroup) \
205 ((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])
206
207#define NumMessagesInGroup(group) \
208 (NumMessagesInSubGroup(group, CatCacheMsgs) + \
209 NumMessagesInSubGroup(group, RelCacheMsgs))
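/*
 * An illustrative example of the index bookkeeping (values invented): if a
 * prior group's catcache subgroup covers array slots [0,3), then
 * SetGroupToFollow(&cur, &prior) makes cur's catcache subgroup the empty
 * range [3,3), and a subsequent AddInvalidationMessage(&cur, CatCacheMsgs,
 * &msg) stores the message in slot 3 and advances the range to [3,4).
 */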
210
211
212/*----------------
213 * Transactional invalidation messages are divided into two groups:
214 * 1) events so far in current command, not yet reflected to caches.
215 * 2) events in previous commands of current transaction; these have
216 * been reflected to local caches, and must be either broadcast to
217 * other backends or rolled back from local cache when we commit
218 * or abort the transaction.
219 * Actually, we need such groups for each level of nested transaction,
220 * so that we can discard events from an aborted subtransaction. When
221 * a subtransaction commits, we append its events to the parent's groups.
222 *
223 * The relcache-file-invalidated flag can just be a simple boolean,
224 * since we only act on it at transaction commit; we don't care which
225 * command of the transaction set it.
226 *----------------
227 */
228
229/* fields common to both transactional and inplace invalidation */
230typedef struct InvalidationInfo
231{
232 /* Events emitted by current command */
 233 InvalidationMsgsGroup CurrentCmdInvalidMsgs;
 234
 235 /* init file must be invalidated? */
 236 bool RelcacheInitFileInval;
 237} InvalidationInfo;
 238
 239/* subclass adding fields specific to transactional invalidation */
 240typedef struct TransInvalidationInfo
 241{
 242 /* Base class */
 243 struct InvalidationInfo ii;
 244
 245 /* Events emitted by previous commands of this (sub)transaction */
 246 InvalidationMsgsGroup PriorCmdInvalidMsgs;
 247
 248 /* Back link to parent transaction's info */
 249 struct TransInvalidationInfo *parent;
 250
 251 /* Subtransaction nesting depth */
 252 int my_level;
 253} TransInvalidationInfo;
 254
 255static TransInvalidationInfo *transInvalInfo = NULL;
 256
 257static InvalidationInfo *inplaceInvalInfo = NULL;
 258
 259/* GUC storage */
 260int debug_discard_caches = 0;
 261
262/*
263 * Dynamically-registered callback functions. Current implementation
264 * assumes there won't be enough of these to justify a dynamically resizable
265 * array; it'd be easy to improve that if needed.
266 *
267 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
268 * syscache are linked into a list pointed to by syscache_callback_links[id].
269 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
270 */
271
272#define MAX_SYSCACHE_CALLBACKS 64
273#define MAX_RELCACHE_CALLBACKS 10
274#define MAX_RELSYNC_CALLBACKS 10
275
276static struct SYSCACHECALLBACK
277{
278 int16 id; /* cache number */
279 int16 link; /* next callback index+1 for same cache */
 280 SyscacheCallbackFunction function;
 281 Datum arg;
 282} syscache_callback_list[MAX_SYSCACHE_CALLBACKS];
 283
 284static int16 syscache_callback_links[SysCacheSize];
 285
 286static int syscache_callback_count = 0;
 287
 288static struct RELCACHECALLBACK
 289{
 290 RelcacheCallbackFunction function;
 291 Datum arg;
 292} relcache_callback_list[MAX_RELCACHE_CALLBACKS];
 293
 294static int relcache_callback_count = 0;
 295
 296static struct RELSYNCCALLBACK
 297{
 298 RelSyncCallbackFunction function;
 299 Datum arg;
 300} relsync_callback_list[MAX_RELSYNC_CALLBACKS];
 301
 302static int relsync_callback_count = 0;
 303
304
305/* ----------------------------------------------------------------
306 * Invalidation subgroup support functions
307 * ----------------------------------------------------------------
308 */
309
310/*
311 * AddInvalidationMessage
312 * Add an invalidation message to a (sub)group.
313 *
314 * The group must be the last active one, since we assume we can add to the
315 * end of the relevant InvalMessageArray.
316 *
317 * subgroup must be CatCacheMsgs or RelCacheMsgs.
318 */
319static void
 320AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
 321 const SharedInvalidationMessage *msg)
322{
323 InvalMessageArray *ima = &InvalMessageArrays[subgroup];
324 int nextindex = group->nextmsg[subgroup];
325
326 if (nextindex >= ima->maxmsgs)
327 {
328 if (ima->msgs == NULL)
329 {
330 /* Create new storage array in TopTransactionContext */
331 int reqsize = 32; /* arbitrary */
332
 333 ima->msgs = (SharedInvalidationMessage *)
 334 MemoryContextAlloc(TopTransactionContext,
 335 reqsize * sizeof(SharedInvalidationMessage));
336 ima->maxmsgs = reqsize;
337 Assert(nextindex == 0);
338 }
339 else
340 {
341 /* Enlarge storage array */
342 int reqsize = 2 * ima->maxmsgs;
343
 344 ima->msgs = (SharedInvalidationMessage *)
 345 repalloc(ima->msgs,
346 reqsize * sizeof(SharedInvalidationMessage));
347 ima->maxmsgs = reqsize;
348 }
349 }
350 /* Okay, add message to current group */
351 ima->msgs[nextindex] = *msg;
352 group->nextmsg[subgroup]++;
353}
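/*
 * Note on growth (illustrative, matching the code above): the array starts
 * at 32 slots on first use and doubles thereafter, so N queued messages cost
 * only O(log N) repallocs; and because entries never move within the array,
 * the slot indexes held in InvalidationMsgsGroups stay valid across growth.
 */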
354
355/*
356 * Append one subgroup of invalidation messages to another, resetting
357 * the source subgroup to empty.
358 */
359static void
 360AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
 361 InvalidationMsgsGroup *src,
 362 int subgroup)
363{
364 /* Messages must be adjacent in main array */
365 Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);
366
367 /* ... which makes this easy: */
368 dest->nextmsg[subgroup] = src->nextmsg[subgroup];
369
370 /*
371 * This is handy for some callers and irrelevant for others. But we do it
372 * always, reasoning that it's bad to leave different groups pointing at
373 * the same fragment of the message array.
374 */
375 SetSubGroupToFollow(src, dest, subgroup);
376}
377
378/*
379 * Process a subgroup of invalidation messages.
380 *
381 * This is a macro that executes the given code fragment for each message in
382 * a message subgroup. The fragment should refer to the message as *msg.
383 */
384#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
385 do { \
386 int _msgindex = (group)->firstmsg[subgroup]; \
387 int _endmsg = (group)->nextmsg[subgroup]; \
388 for (; _msgindex < _endmsg; _msgindex++) \
389 { \
390 SharedInvalidationMessage *msg = \
391 &InvalMessageArrays[subgroup].msgs[_msgindex]; \
392 codeFragment; \
393 } \
394 } while (0)
395
396/*
397 * Process a subgroup of invalidation messages as an array.
398 *
399 * As above, but the code fragment can handle an array of messages.
400 * The fragment should refer to the messages as msgs[], with n entries.
401 */
402#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
403 do { \
404 int n = NumMessagesInSubGroup(group, subgroup); \
405 if (n > 0) { \
406 SharedInvalidationMessage *msgs = \
407 &InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
408 codeFragment; \
409 } \
410 } while (0)
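/*
 * Usage sketch for the two macros above, as they are employed later in this
 * file; the code fragment is pasted into the loop body, so a plain function
 * call works:
 *
 *		ProcessMessageSubGroup(group, CatCacheMsgs,
 *							   LocalExecuteInvalidationMessage(msg));
 *		ProcessMessageSubGroupMulti(group, RelCacheMsgs,
 *									SendSharedInvalidMessages(msgs, n));
 */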
411
412
413/* ----------------------------------------------------------------
414 * Invalidation group support functions
415 *
416 * These routines understand about the division of a logical invalidation
417 * group into separate physical arrays for catcache and relcache entries.
418 * ----------------------------------------------------------------
419 */
420
421/*
422 * Add a catcache inval entry
423 */
424static void
 425AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
 426 int id, uint32 hashValue, Oid dbId)
427{
 428 SharedInvalidationMessage msg;
 429
430 Assert(id < CHAR_MAX);
431 msg.cc.id = (int8) id;
432 msg.cc.dbId = dbId;
433 msg.cc.hashValue = hashValue;
434
435 /*
 436 * Mark the padding bytes in SharedInvalidationMessage structs as
 437 * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by
 438 * multiple processes, will cause spurious valgrind warnings about
 439 * undefined memory being used. That's because valgrind remembers the
 440 * undefined bytes from the last local process's store, not realizing that
 441 * another process has written since, filling the previously uninitialized
 442 * bytes.
443 */
444 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
445
 446 AddInvalidationMessage(group, CatCacheMsgs, &msg);
 447}
448
449/*
450 * Add a whole-catalog inval entry
451 */
452static void
 453AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
 454 Oid dbId, Oid catId)
455{
 456 SharedInvalidationMessage msg;
 457
 458 msg.cat.id = SHAREDINVALCATALOG_ID;
459 msg.cat.dbId = dbId;
460 msg.cat.catId = catId;
461 /* check AddCatcacheInvalidationMessage() for an explanation */
462 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
463
 464 AddInvalidationMessage(group, CatCacheMsgs, &msg);
 465}
466
467/*
468 * Add a relcache inval entry
469 */
470static void
 471AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
 472 Oid dbId, Oid relId)
473{
 474 SharedInvalidationMessage msg;
 475
476 /*
477 * Don't add a duplicate item. We assume dbId need not be checked because
478 * it will never change. InvalidOid for relId means all relations so we
479 * don't need to add individual ones when it is present.
480 */
 481 ProcessMessageSubGroup(group, RelCacheMsgs,
 482 if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
483 (msg->rc.relId == relId ||
484 msg->rc.relId == InvalidOid))
485 return);
486
487 /* OK, add the item */
 488 msg.rc.id = SHAREDINVALRELCACHE_ID;
 489 msg.rc.dbId = dbId;
490 msg.rc.relId = relId;
491 /* check AddCatcacheInvalidationMessage() for an explanation */
492 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
493
 494 AddInvalidationMessage(group, RelCacheMsgs, &msg);
 495}
496
497/*
498 * Add a relsync inval entry
499 *
500 * We put these into the relcache subgroup for simplicity. This message is the
 501 * same as AddRelcacheInvalidationMessage() except that it is for the
 502 * RelationSyncCache maintained by the logical decoding output plugin pgoutput.
503 */
504static void
 505AddRelsyncInvalidationMessage(InvalidationMsgsGroup *group,
 506 Oid dbId, Oid relId)
507{
 508 SharedInvalidationMessage msg;
 509
510 /* Don't add a duplicate item. */
 511 ProcessMessageSubGroup(group, RelCacheMsgs,
 512 if (msg->rc.id == SHAREDINVALRELSYNC_ID &&
513 (msg->rc.relId == relId ||
514 msg->rc.relId == InvalidOid))
515 return);
516
517 /* OK, add the item */
 518 msg.rc.id = SHAREDINVALRELSYNC_ID;
 519 msg.rc.dbId = dbId;
520 msg.rc.relId = relId;
521 /* check AddCatcacheInvalidationMessage() for an explanation */
522 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
523
 524 AddInvalidationMessage(group, RelCacheMsgs, &msg);
 525}
526
527/*
528 * Add a snapshot inval entry
529 *
530 * We put these into the relcache subgroup for simplicity.
531 */
532static void
 533AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
 534 Oid dbId, Oid relId)
535{
 536 SharedInvalidationMessage msg;
 537
538 /* Don't add a duplicate item */
539 /* We assume dbId need not be checked because it will never change */
 540 ProcessMessageSubGroup(group, RelCacheMsgs,
 541 if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
542 msg->sn.relId == relId)
543 return);
544
545 /* OK, add the item */
 546 msg.sn.id = SHAREDINVALSNAPSHOT_ID;
 547 msg.sn.dbId = dbId;
548 msg.sn.relId = relId;
549 /* check AddCatcacheInvalidationMessage() for an explanation */
550 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
551
 552 AddInvalidationMessage(group, RelCacheMsgs, &msg);
 553}
554
555/*
556 * Append one group of invalidation messages to another, resetting
557 * the source group to empty.
558 */
559static void
 560AppendInvalidationMessages(InvalidationMsgsGroup *dest,
 561 InvalidationMsgsGroup *src)
 562{
 563 AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
 564 AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
 565}
566
567/*
568 * Execute the given function for all the messages in an invalidation group.
569 * The group is not altered.
570 *
571 * catcache entries are processed first, for reasons mentioned above.
572 */
573static void
 574ProcessInvalidationMessages(InvalidationMsgsGroup *group,
 575 void (*func) (SharedInvalidationMessage *msg))
576{
577 ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
578 ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
579}
580
581/*
582 * As above, but the function is able to process an array of messages
583 * rather than just one at a time.
584 */
585static void
 586ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
 587 void (*func) (const SharedInvalidationMessage *msgs, int n))
588{
589 ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
590 ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
591}
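/*
 * To illustrate the division of labor: commit-time code hands whole message
 * arrays to the SI queue, while local (re)processing replays messages one at
 * a time, e.g.:
 *
 *		ProcessInvalidationMessagesMulti(group, SendSharedInvalidMessages);
 *		ProcessInvalidationMessages(group, LocalExecuteInvalidationMessage);
 */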
592
593/* ----------------------------------------------------------------
594 * private support functions
595 * ----------------------------------------------------------------
596 */
597
598/*
599 * RegisterCatcacheInvalidation
600 *
601 * Register an invalidation event for a catcache tuple entry.
602 */
603static void
 604RegisterCatcacheInvalidation(int cacheId,
 605 uint32 hashValue,
606 Oid dbId,
607 void *context)
608{
609 InvalidationInfo *info = (InvalidationInfo *) context;
610
 611 AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
 612 cacheId, hashValue, dbId);
613}
614
615/*
616 * RegisterCatalogInvalidation
617 *
618 * Register an invalidation event for all catcache entries from a catalog.
619 */
620static void
 621RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
 622{
 623 AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
 624}
625
626/*
627 * RegisterRelcacheInvalidation
628 *
629 * As above, but register a relcache invalidation event.
630 */
631static void
 632RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
 633{
 634 AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
 635
636 /*
637 * Most of the time, relcache invalidation is associated with system
638 * catalog updates, but there are a few cases where it isn't. Quick hack
639 * to ensure that the next CommandCounterIncrement() will think that we
640 * need to do CommandEndInvalidationMessages().
641 */
642 (void) GetCurrentCommandId(true);
643
644 /*
645 * If the relation being invalidated is one of those cached in a relcache
646 * init file, mark that we need to zap that file at commit. For simplicity
647 * invalidations for a specific database always invalidate the shared file
648 * as well. Also zap when we are invalidating whole relcache.
649 */
650 if (relId == InvalidOid || RelationIdIsInInitFile(relId))
651 info->RelcacheInitFileInval = true;
652}
653
654/*
655 * RegisterRelsyncInvalidation
656 *
657 * As above, but register a relsynccache invalidation event.
658 */
659static void
 660RegisterRelsyncInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
 661{
 662 AddRelsyncInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
 663}
664
665/*
666 * RegisterSnapshotInvalidation
667 *
668 * Register an invalidation event for MVCC scans against a given catalog.
669 * Only needed for catalogs that don't have catcaches.
670 */
671static void
 672RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
 673{
 674 AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
 675}
676
677/*
678 * PrepareInvalidationState
679 * Initialize inval data for the current (sub)transaction.
680 */
681static InvalidationInfo *
 682PrepareInvalidationState(void)
 683{
684 TransInvalidationInfo *myInfo;
685
686 /* PrepareToInvalidateCacheTuple() needs relcache */
 687 AssertCouldGetRelation();
 688 /* Can't queue transactional message while collecting inplace messages. */
689 Assert(inplaceInvalInfo == NULL);
690
691 if (transInvalInfo != NULL &&
 692 transInvalInfo->my_level == GetCurrentTransactionNestLevel())
 693 return (InvalidationInfo *) transInvalInfo;
 694
695 myInfo = (TransInvalidationInfo *)
 696 MemoryContextAllocZero(TopTransactionContext,
 697 sizeof(TransInvalidationInfo));
698 myInfo->parent = transInvalInfo;
 699 myInfo->my_level = GetCurrentTransactionNestLevel();
 700
701 /* Now, do we have a previous stack entry? */
702 if (transInvalInfo != NULL)
703 {
704 /* Yes; this one should be for a deeper nesting level. */
 705 Assert(myInfo->my_level > transInvalInfo->my_level);
 706
707 /*
708 * The parent (sub)transaction must not have any current (i.e.,
709 * not-yet-locally-processed) messages. If it did, we'd have a
710 * semantic problem: the new subtransaction presumably ought not be
711 * able to see those events yet, but since the CommandCounter is
712 * linear, that can't work once the subtransaction advances the
713 * counter. This is a convenient place to check for that, as well as
714 * being important to keep management of the message arrays simple.
715 */
 716 if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
 717 elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");
718
719 /*
720 * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
721 * which is fine for the first (sub)transaction, but otherwise we need
722 * to update them to follow whatever is already in the arrays.
723 */
 724 SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
 725 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
 726 SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
 727 &myInfo->PriorCmdInvalidMsgs);
728 }
729 else
730 {
731 /*
732 * Here, we need only clear any array pointers left over from a prior
733 * transaction.
734 */
 735 InvalMessageArrays[CatCacheMsgs].msgs = NULL;
 736 InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
 737 InvalMessageArrays[RelCacheMsgs].msgs = NULL;
 738 InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
 739 }
740
741 transInvalInfo = myInfo;
742 return (InvalidationInfo *) myInfo;
743}
744
745/*
746 * PrepareInplaceInvalidationState
747 * Initialize inval data for an inplace update.
748 *
749 * See previous function for more background.
750 */
751static InvalidationInfo *
 752PrepareInplaceInvalidationState(void)
 753{
754 InvalidationInfo *myInfo;
755
 756 AssertCouldGetRelation();
 757 /* limit of one inplace update under assembly */
758 Assert(inplaceInvalInfo == NULL);
759
760 /* gone after WAL insertion CritSection ends, so use current context */
761 myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));
762
763 /* Stash our messages past end of the transactional messages, if any. */
 764 if (transInvalInfo != NULL)
 765 SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
 766 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
 767 else
 768 {
 769 InvalMessageArrays[CatCacheMsgs].msgs = NULL;
 770 InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
 771 InvalMessageArrays[RelCacheMsgs].msgs = NULL;
 772 InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
 773 }
774
775 inplaceInvalInfo = myInfo;
776 return myInfo;
777}
778
779/* ----------------------------------------------------------------
780 * public functions
781 * ----------------------------------------------------------------
782 */
783
784void
 785InvalidateSystemCachesExtended(bool debug_discard)
 786{
787 int i;
788
 789 InvalidateCatalogSnapshot();
 790 ResetCatalogCachesExt(debug_discard);
791 RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */
792
793 for (i = 0; i < syscache_callback_count; i++)
794 {
795 struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
796
797 ccitem->function(ccitem->arg, ccitem->id, 0);
798 }
799
800 for (i = 0; i < relcache_callback_count; i++)
801 {
802 struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
803
804 ccitem->function(ccitem->arg, InvalidOid);
805 }
806
807 for (i = 0; i < relsync_callback_count; i++)
808 {
809 struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
810
811 ccitem->function(ccitem->arg, InvalidOid);
812 }
813}
814
815/*
816 * LocalExecuteInvalidationMessage
817 *
818 * Process a single invalidation message (which could be of any type).
819 * Only the local caches are flushed; this does not transmit the message
820 * to other backends.
821 */
822void
 823LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
 824{
825 if (msg->id >= 0)
826 {
827 if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
828 {
 829 InvalidateCatalogSnapshot();
 830
831 SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);
 832
 833 CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
834 }
835 }
836 else if (msg->id == SHAREDINVALCATALOG_ID)
837 {
838 if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
839 {
 840 InvalidateCatalogSnapshot();
 841
 842 CatalogCacheFlushCatalog(msg->cat.catId);
 843
844 /* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
845 }
846 }
847 else if (msg->id == SHAREDINVALRELCACHE_ID)
848 {
849 if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
850 {
851 int i;
852
853 if (msg->rc.relId == InvalidOid)
 854 RelationCacheInvalidate(false);
 855 else
 856 RelationCacheInvalidateEntry(msg->rc.relId);
 857
858 for (i = 0; i < relcache_callback_count; i++)
859 {
860 struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
861
862 ccitem->function(ccitem->arg, msg->rc.relId);
863 }
864 }
865 }
866 else if (msg->id == SHAREDINVALSMGR_ID)
867 {
868 /*
869 * We could have smgr entries for relations of other databases, so no
870 * short-circuit test is possible here.
871 */
872 RelFileLocatorBackend rlocator;
873
874 rlocator.locator = msg->sm.rlocator;
875 rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
876 smgrreleaserellocator(rlocator);
877 }
878 else if (msg->id == SHAREDINVALRELMAP_ID)
879 {
880 /* We only care about our own database and shared catalogs */
881 if (msg->rm.dbId == InvalidOid)
 882 RelationMapInvalidate(true);
 883 else if (msg->rm.dbId == MyDatabaseId)
 884 RelationMapInvalidate(false);
 885 }
886 else if (msg->id == SHAREDINVALSNAPSHOT_ID)
887 {
888 /* We only care about our own database and shared catalogs */
889 if (msg->sn.dbId == InvalidOid)
 890 InvalidateCatalogSnapshot();
 891 else if (msg->sn.dbId == MyDatabaseId)
 892 InvalidateCatalogSnapshot();
 893 }
894 else if (msg->id == SHAREDINVALRELSYNC_ID)
895 {
896 /* We only care about our own database */
897 if (msg->rs.dbId == MyDatabaseId)
 898 CallRelSyncCallbacks(msg->rs.relId);
 899 }
900 else
901 elog(FATAL, "unrecognized SI message ID: %d", msg->id);
902}
903
904/*
905 * InvalidateSystemCaches
906 *
907 * This blows away all tuples in the system catalog caches and
908 * all the cached relation descriptors and smgr cache entries.
909 * Relation descriptors that have positive refcounts are then rebuilt.
910 *
911 * We call this when we see a shared-inval-queue overflow signal,
912 * since that tells us we've lost some shared-inval messages and hence
913 * don't know what needs to be invalidated.
914 */
915void
 916InvalidateSystemCaches(void)
 917{
 918 InvalidateSystemCachesExtended(false);
 919}
920
921/*
922 * AcceptInvalidationMessages
923 * Read and process invalidation messages from the shared invalidation
924 * message queue.
925 *
926 * Note:
927 * This should be called as the first step in processing a transaction.
928 */
929void
 930AcceptInvalidationMessages(void)
 931{
932#ifdef USE_ASSERT_CHECKING
933 /* message handlers shall access catalogs only during transactions */
934 if (IsTransactionState())
 935 AssertCouldGetRelation();
 936#endif
937
 938 ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
 939 InvalidateSystemCaches);
 940
941 /*----------
942 * Test code to force cache flushes anytime a flush could happen.
943 *
944 * This helps detect intermittent faults caused by code that reads a cache
945 * entry and then performs an action that could invalidate the entry, but
946 * rarely actually does so. This can spot issues that would otherwise
947 * only arise with badly timed concurrent DDL, for example.
948 *
949 * The default debug_discard_caches = 0 does no forced cache flushes.
950 *
951 * If used with CLOBBER_FREED_MEMORY,
952 * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
953 * provides a fairly thorough test that the system contains no cache-flush
954 * hazards. However, it also makes the system unbelievably slow --- the
955 * regression tests take about 100 times longer than normal.
956 *
957 * If you're a glutton for punishment, try
958 * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
959 * This slows things by at least a factor of 10000, so I wouldn't suggest
960 * trying to run the entire regression tests that way. It's useful to try
961 * a few simple tests, to make sure that cache reload isn't subject to
962 * internal cache-flush hazards, but after you've done a few thousand
963 * recursive reloads it's unlikely you'll learn more.
964 *----------
965 */
966#ifdef DISCARD_CACHES_ENABLED
967 {
968 static int recursion_depth = 0;
969
 970 if (recursion_depth < debug_discard_caches)
 971 {
 972 recursion_depth++;
 973 InvalidateSystemCachesExtended(true);
 974 recursion_depth--;
 975 }
976 }
977#endif
978}
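/*
 * A hedged sketch of the canonical caller pattern (see LockRelationOid() in
 * lmgr.c for the real interlock): take a lock on the object first, then
 * accept invalidations, so that messages broadcast by a committed DDL
 * transaction that held a conflicting lock are applied before any cached
 * state is trusted:
 *
 *		LockRelationOid(relid, lockmode);
 *		AcceptInvalidationMessages();
 *		rel = RelationIdGetRelation(relid);
 */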
979
980/*
981 * PostPrepare_Inval
982 * Clean up after successful PREPARE.
983 *
984 * Here, we want to act as though the transaction aborted, so that we will
985 * undo any syscache changes it made, thereby bringing us into sync with the
986 * outside world, which doesn't believe the transaction committed yet.
987 *
988 * If the prepared transaction is later aborted, there is nothing more to
989 * do; if it commits, we will receive the consequent inval messages just
990 * like everyone else.
991 */
992void
 993PostPrepare_Inval(void)
 994{
995 AtEOXact_Inval(false);
996}
997
998/*
999 * xactGetCommittedInvalidationMessages() is called by
1000 * RecordTransactionCommit() to collect invalidation messages to add to the
1001 * commit record. This applies only to commit message types, never to
1002 * abort records. Must always run before AtEOXact_Inval(), since that
1003 * removes the data we need to see.
1004 *
1005 * Remember that this runs before we have officially committed, so we
1006 * must not do anything here to change what might occur *if* we should
1007 * fail between here and the actual commit.
1008 *
1009 * see also xact_redo_commit() and xact_desc_commit()
1010 */
1011int
1012xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
1013 bool *RelcacheInitFileInval)
1014{
1015 SharedInvalidationMessage *msgarray;
1016 int nummsgs;
1017 int nmsgs;
1018
1019 /* Quick exit if we haven't done anything with invalidation messages. */
1020 if (transInvalInfo == NULL)
1021 {
1022 *RelcacheInitFileInval = false;
1023 *msgs = NULL;
1024 return 0;
1025 }
1026
1027 /* Must be at top of stack */
1028 Assert(transInvalInfo->my_level == GetCurrentTransactionNestLevel());
1029
1030 /*
1031 * Relcache init file invalidation requires processing both before and
1032 * after we send the SI messages. However, we need not do anything unless
1033 * we committed.
1034 */
1035 *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
1036
1037 /*
1038 * Collect all the pending messages into a single contiguous array of
1039 * invalidation messages, to simplify what needs to happen while building
1040 * the commit WAL message. Maintain the order that they would be
1041 * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
1042 * is as similar as possible to original. We want the same bugs, if any,
1043 * not new ones.
1044 */
1045 nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
1046 NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs);
1047
1048 *msgs = msgarray = (SharedInvalidationMessage *)
1049 MemoryContextAlloc(CurTransactionContext,
1050 nummsgs * sizeof(SharedInvalidationMessage));
1051
1052 nmsgs = 0;
1053 ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1054 CatCacheMsgs,
1055 (memcpy(msgarray + nmsgs,
1056 msgs,
1057 n * sizeof(SharedInvalidationMessage)),
1058 nmsgs += n));
1059 ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1060 CatCacheMsgs,
1061 (memcpy(msgarray + nmsgs,
1062 msgs,
1063 n * sizeof(SharedInvalidationMessage)),
1064 nmsgs += n));
1065 ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1066 RelCacheMsgs,
1067 (memcpy(msgarray + nmsgs,
1068 msgs,
1069 n * sizeof(SharedInvalidationMessage)),
1070 nmsgs += n));
1071 ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1072 RelCacheMsgs,
1073 (memcpy(msgarray + nmsgs,
1074 msgs,
1075 n * sizeof(SharedInvalidationMessage)),
1076 nmsgs += n));
1077 Assert(nmsgs == nummsgs);
1078
1079 return nmsgs;
1080}
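/*
 * A hedged sketch of how this feeds the commit record (see
 * RecordTransactionCommit() in xact.c for the real code):
 *
 *		nmsgs = xactGetCommittedInvalidationMessages(&invalMessages,
 *													 &RelcacheInitFileInval);
 *
 * The returned array and flag are attached to the commit WAL record, from
 * which ProcessCommittedInvalidationMessages() replays them on standbys.
 */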
1081
1082/*
1083 * inplaceGetInvalidationMessages() is called by the inplace update to collect
1084 * invalidation messages to add to its WAL record. Like the previous
1085 * function, we might still fail.
1086 */
1087int
1088inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1089 bool *RelcacheInitFileInval)
1090{
1091 SharedInvalidationMessage *msgarray;
1092 int nummsgs;
1093 int nmsgs;
1094
1095 /* Quick exit if we haven't done anything with invalidation messages. */
1096 if (inplaceInvalInfo == NULL)
1097 {
1098 *RelcacheInitFileInval = false;
1099 *msgs = NULL;
1100 return 0;
1101 }
1102
1103 *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1104 nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1105 *msgs = msgarray = (SharedInvalidationMessage *)
1106 palloc(nummsgs * sizeof(SharedInvalidationMessage));
1107
1108 nmsgs = 0;
1109 ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1110 CatCacheMsgs,
1111 (memcpy(msgarray + nmsgs,
1112 msgs,
1113 n * sizeof(SharedInvalidationMessage)),
1114 nmsgs += n));
1115 ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1116 RelCacheMsgs,
1117 (memcpy(msgarray + nmsgs,
1118 msgs,
1119 n * sizeof(SharedInvalidationMessage)),
1120 nmsgs += n));
1121 Assert(nmsgs == nummsgs);
1122
1123 return nmsgs;
1124}
1125
1126/*
1127 * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1128 * standby_redo() to process invalidation messages. Currently that happens
1129 * only at end-of-xact.
1130 *
1131 * Relcache init file invalidation requires processing both
1132 * before and after we send the SI messages. See AtEOXact_Inval()
1133 */
1134void
1135ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1136 int nmsgs, bool RelcacheInitFileInval,
1137 Oid dbid, Oid tsid)
1138{
1139 if (nmsgs <= 0)
1140 return;
1141
1142 elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1143 (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1144
1145 if (RelcacheInitFileInval)
1146 {
1147 elog(DEBUG4, "removing relcache init files for database %u", dbid);
1148
1149 /*
1150 * RelationCacheInitFilePreInvalidate, when the invalidation message
1151 * is for a specific database, requires DatabasePath to be set, but we
1152 * should not use SetDatabasePath during recovery, since it is
1153 * intended to be used only once by normal backends. Hence, a quick
1154 * hack: set DatabasePath directly then unset after use.
1155 */
1156 if (OidIsValid(dbid))
1157 DatabasePath = GetDatabasePath(dbid, tsid);
1158
1159 RelationCacheInitFilePreInvalidate();
1160
1161 if (OidIsValid(dbid))
1162 {
1163 pfree(DatabasePath);
1164 DatabasePath = NULL;
1165 }
1166 }
1167
1168 SendSharedInvalidMessages(msgs, nmsgs);
1169
1170 if (RelcacheInitFileInval)
1171 RelationCacheInitFilePostInvalidate();
1172}
1173
1174/*
1175 * AtEOXact_Inval
1176 * Process queued-up invalidation messages at end of main transaction.
1177 *
1178 * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1179 * to the shared invalidation message queue. Note that these will be read
1180 * not only by other backends, but also by our own backend at the next
1181 * transaction start (via AcceptInvalidationMessages). This means that
1182 * we can skip immediate local processing of anything that's still in
1183 * CurrentCmdInvalidMsgs, and just send that list out too.
1184 *
1185 * If not isCommit, we are aborting, and must locally process the messages
1186 * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1187 * since they'll not have seen our changed tuples anyway. We can forget
1188 * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1189 * the caches yet.
1190 *
1191 * In any case, reset our state to empty. We need not physically
1192 * free memory here, since TopTransactionContext is about to be emptied
1193 * anyway.
1194 *
1195 * Note:
1196 * This should be called as the last step in processing a transaction.
1197 */
1198void
1199AtEOXact_Inval(bool isCommit)
1200{
1201 inplaceInvalInfo = NULL;
1202
1203 /* Quick exit if no transactional messages */
1204 if (transInvalInfo == NULL)
1205 return;
1206
1207 /* Must be at top of stack */
1208 Assert(transInvalInfo->my_level == 1);
1209
1210 INJECTION_POINT("transaction-end-process-inval");
1211
1212 if (isCommit)
1213 {
1214 /*
1215 * Relcache init file invalidation requires processing both before and
1216 * after we send the SI messages. However, we need not do anything
1217 * unless we committed.
1218 */
1219 if (transInvalInfo->ii.RelcacheInitFileInval)
1220 RelationCacheInitFilePreInvalidate();
1221
1222 AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1223 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1224
1225 ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1226 SendSharedInvalidMessages);
1227
1228 if (transInvalInfo->ii.RelcacheInitFileInval)
1229 RelationCacheInitFilePostInvalidate();
1230 }
1231 else
1232 {
1233 ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1234 LocalExecuteInvalidationMessage);
1235 }
1236
1237 /* Need not free anything explicitly */
1238 transInvalInfo = NULL;
1239}
1240
1241/*
1242 * PreInplace_Inval
1243 * Process queued-up invalidation before inplace update critical section.
1244 *
1245 * Tasks belong here if they are safe even if the inplace update does not
1246 * complete. Currently, this just unlinks a cache file, which can fail. The
1247 * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1248 */
1249void
1250PreInplace_Inval(void)
1251{
1252 Assert(CritSectionCount == 0);
1253
1254 if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1255 RelationCacheInitFilePreInvalidate();
1256}
1257
1258/*
1259 * AtInplace_Inval
1260 * Process queued-up invalidations after inplace update buffer mutation.
1261 */
1262void
1263AtInplace_Inval(void)
1264{
1265 Assert(CritSectionCount > 0);
1266
1267 if (inplaceInvalInfo == NULL)
1268 return;
1269
1270 ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1271 SendSharedInvalidMessages);
1272
1273 if (inplaceInvalInfo->RelcacheInitFileInval)
1274 RelationCacheInitFilePostInvalidate();
1275
1276 inplaceInvalInfo = NULL;
1277}
1278
1279/*
1280 * ForgetInplace_Inval
1281 * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1282 * invalidations. This lets inplace update enumerate invalidations
1283 * optimistically, before locking the buffer.
1284 */
1285void
1286ForgetInplace_Inval(void)
1287{
1288 inplaceInvalInfo = NULL;
1289}
1290
1291/*
1292 * AtEOSubXact_Inval
1293 * Process queued-up invalidation messages at end of subtransaction.
1294 *
1295 * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1296 * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1297 * parent's PriorCmdInvalidMsgs list.
1298 *
1299 * If not isCommit, we are aborting, and must locally process the messages
1300 * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1301 * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1302 * touched the caches yet.
1303 *
1304 * In any case, pop the transaction stack. We need not physically free memory
1305 * here, since CurTransactionContext is about to be emptied anyway
1306 * (if aborting). Beware of the possibility of aborting the same nesting
1307 * level twice, though.
1308 */
1309void
1310AtEOSubXact_Inval(bool isCommit)
1311{
1312 int my_level;
1313 TransInvalidationInfo *myInfo;
1314
1315 /*
1316 * Successful inplace update must clear this, but we clear it on abort.
1317 * Inplace updates allocate this in CurrentMemoryContext, which has
1318 * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1319 */
1320 if (isCommit)
1321 Assert(inplaceInvalInfo == NULL);
1322 else
1323 inplaceInvalInfo = NULL;
1324
1325 /* Quick exit if no transactional messages. */
1326 myInfo = transInvalInfo;
1327 if (myInfo == NULL)
1328 return;
1329
1330 /* Also bail out quickly if messages are not for this level. */
1331 my_level = GetCurrentTransactionNestLevel();
1332 if (myInfo->my_level != my_level)
1333 {
1334 Assert(myInfo->my_level < my_level);
1335 return;
1336 }
1337
1338 if (isCommit)
1339 {
1340 /* If CurrentCmdInvalidMsgs still has anything, fix it */
1341 CommandEndInvalidationMessages();
1342
1343 /*
1344 * We create invalidation stack entries lazily, so the parent might
1345 * not have one. Instead of creating one, moving all the data over,
1346 * and then freeing our own, we can just adjust the level of our own
1347 * entry.
1348 */
1349 if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1350 {
1351 myInfo->my_level--;
1352 return;
1353 }
1354
1355 /*
1356 * Pass up my inval messages to parent. Notice that we stick them in
1357 * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1358 * already been locally processed. (This would trigger the Assert in
1359 * AppendInvalidationMessageSubGroup if the parent's
1360 * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1361 * PrepareInvalidationState.)
1362 */
1363 AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1364 &myInfo->PriorCmdInvalidMsgs);
1365
1366 /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1367 SetGroupToFollow(&myInfo->parent->ii.CurrentCmdInvalidMsgs,
1368 &myInfo->parent->PriorCmdInvalidMsgs);
1369
1370 /* Pending relcache inval becomes parent's problem too */
1371 if (myInfo->ii.RelcacheInitFileInval)
1372 myInfo->parent->ii.RelcacheInitFileInval = true;
1373
1374 /* Pop the transaction state stack */
1375 transInvalInfo = myInfo->parent;
1376
1377 /* Need not free anything else explicitly */
1378 pfree(myInfo);
1379 }
1380 else
1381 {
1382 ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1383 LocalExecuteInvalidationMessage);
1384
1385 /* Pop the transaction state stack */
1386 transInvalInfo = myInfo->parent;
1387
1388 /* Need not free anything else explicitly */
1389 pfree(myInfo);
1390 }
1391}
1392
1393/*
1394 * CommandEndInvalidationMessages
1395 * Process queued-up invalidation messages at end of one command
1396 * in a transaction.
1397 *
1398 * Here, we send no messages to the shared queue, since we don't know yet if
1399 * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1400 * list, so as to flush our caches of any entries we have outdated in the
1401 * current command. We then move the current-cmd list over to become part
1402 * of the prior-cmds list.
1403 *
1404 * Note:
1405 * This should be called during CommandCounterIncrement(),
1406 * after we have advanced the command ID.
1407 */
1408void
1409CommandEndInvalidationMessages(void)
1410{
1411 /*
1412 * You might think this shouldn't be called outside any transaction, but
1413 * bootstrap does it, and also ABORT issued when not in a transaction. So
1414 * just quietly return if no state to work on.
1415 */
1416 if (transInvalInfo == NULL)
1417 return;
1418
1419 ProcessInvalidationMessages(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1420 LocalExecuteInvalidationMessage);
1421
1422 /* WAL Log per-command invalidation messages for wal_level=logical */
1423 if (XLogLogicalInfoActive())
1424 LogLogicalInvalidations();
1425
1426 AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1427 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1428}
1429
1430
1431/*
1432 * CacheInvalidateHeapTupleCommon
1433 * Common logic for end-of-command and inplace variants.
1434 */
1435static void
1436CacheInvalidateHeapTupleCommon(Relation relation,
1437 HeapTuple tuple,
1438 HeapTuple newtuple,
1439 InvalidationInfo *(*prepare_callback) (void))
1440{
1441 InvalidationInfo *info;
1442 Oid tupleRelId;
1443 Oid databaseId;
1444 Oid relationId;
1445
1446 /* PrepareToInvalidateCacheTuple() needs relcache */
1447 AssertCouldGetRelation();
1448
1449 /* Do nothing during bootstrap */
1450 if (IsBootstrapProcessingMode())
1451 return;
1452
1453 /*
1454 * We only need to worry about invalidation for tuples that are in system
1455 * catalogs; user-relation tuples are never in catcaches and can't affect
1456 * the relcache either.
1457 */
1458 if (!IsCatalogRelation(relation))
1459 return;
1460
1461 /*
1462 * IsCatalogRelation() will return true for TOAST tables of system
1463 * catalogs, but we don't care about those, either.
1464 */
1465 if (IsToastRelation(relation))
1466 return;
1467
1468 /* Allocate any required resources. */
1469 info = prepare_callback();
1470
1471 /*
1472 * First let the catcache do its thing
1473 */
1474 tupleRelId = RelationGetRelid(relation);
1475 if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1476 {
1477 databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1478 RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1479 }
1480 else
1481 PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1482 RegisterCatcacheInvalidation,
1483 (void *) info);
1484
1485 /*
1486 * Now, is this tuple one of the primary definers of a relcache entry? See
1487 * comments in file header for deeper explanation.
1488 *
1489 * Note we ignore newtuple here; we assume an update cannot move a tuple
1490 * from being part of one relcache entry to being part of another.
1491 */
1492 if (tupleRelId == RelationRelationId)
1493 {
1494 Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1495
1496 relationId = classtup->oid;
1497 if (classtup->relisshared)
1498 databaseId = InvalidOid;
1499 else
1500 databaseId = MyDatabaseId;
1501 }
1502 else if (tupleRelId == AttributeRelationId)
1503 {
1504 Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1505
1506 relationId = atttup->attrelid;
1507
1508 /*
1509 * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1510 * even if the rel in question is shared (which we can't easily tell).
1511 * This essentially means that only backends in this same database
1512 * will react to the relcache flush request. This is in fact
1513 * appropriate, since only those backends could see our pg_attribute
1514 * change anyway. It looks a bit ugly though. (In practice, shared
1515 * relations can't have schema changes after bootstrap, so we should
1516 * never come here for a shared rel anyway.)
1517 */
1518 databaseId = MyDatabaseId;
1519 }
1520 else if (tupleRelId == IndexRelationId)
1521 {
1522 Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1523
1524 /*
1525 * When a pg_index row is updated, we should send out a relcache inval
1526 * for the index relation. As above, we don't know the shared status
1527 * of the index, but in practice it doesn't matter since indexes of
1528 * shared catalogs can't have such updates.
1529 */
1530 relationId = indextup->indexrelid;
1531 databaseId = MyDatabaseId;
1532 }
1533 else if (tupleRelId == ConstraintRelationId)
1534 {
1535 Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1536
1537 /*
1538 * Foreign keys are part of relcache entries, too, so send out an
1539 * inval for the table that the FK applies to.
1540 */
1541 if (constrtup->contype == CONSTRAINT_FOREIGN &&
1542 OidIsValid(constrtup->conrelid))
1543 {
1544 relationId = constrtup->conrelid;
1545 databaseId = MyDatabaseId;
1546 }
1547 else
1548 return;
1549 }
1550 else
1551 return;
1552
1553 /*
1554 * Yes. We need to register a relcache invalidation event.
1555 */
1556 RegisterRelcacheInvalidation(info, databaseId, relationId);
1557}
1558
1559/*
1560 * CacheInvalidateHeapTuple
1561 * Register the given tuple for invalidation at end of command
1562 * (ie, current command is creating or outdating this tuple) and end of
1563 * transaction. Also, detect whether a relcache invalidation is implied.
1564 *
1565 * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1566 * For an update, we are called just once, with tuple being the old tuple
1567 * version and newtuple the new version. This allows avoidance of duplicate
1568 * effort during an update.
1569 */
1570void
1571CacheInvalidateHeapTuple(Relation relation,
1572 HeapTuple tuple,
1573 HeapTuple newtuple)
1574{
1575 CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1576 PrepareInvalidationState);
1577}
1578
1579/*
1580 * CacheInvalidateHeapTupleInplace
1581 * Register the given tuple for nontransactional invalidation pertaining
1582 * to an inplace update. Also, detect whether a relcache invalidation is
1583 * implied.
1584 *
1585 * Like CacheInvalidateHeapTuple(), but for inplace updates.
1586 */
1587void
1588CacheInvalidateHeapTupleInplace(Relation relation,
1589 HeapTuple tuple,
1590 HeapTuple newtuple)
1591{
1592 CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1593 PrepareInplaceInvalidationState);
1594}
1595
1596/*
1597 * CacheInvalidateCatalog
1598 * Register invalidation of the whole content of a system catalog.
1599 *
1600 * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1601 * changed any tuples as moved them around. Some uses of catcache entries
1602 * expect their TIDs to be correct, so we have to blow away the entries.
1603 *
1604 * Note: we expect caller to verify that the rel actually is a system
1605 * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1606 */
1607void
1608CacheInvalidateCatalog(Oid catalogId)
1609{
1610 Oid databaseId;
1611
1612 if (IsSharedRelation(catalogId))
1613 databaseId = InvalidOid;
1614 else
1615 databaseId = MyDatabaseId;
1616
1617 RegisterCatalogInvalidation(PrepareInvalidationState(),
1618 databaseId, catalogId);
1619}
1620
1621/*
1622 * CacheInvalidateRelcache
1623 * Register invalidation of the specified relation's relcache entry
1624 * at end of command.
1625 *
1626 * This is used in places that need to force relcache rebuild but aren't
1627 * changing any of the tuples recognized as contributors to the relcache
1628 * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1629 */
1630void
1631CacheInvalidateRelcache(Relation relation)
1632{
1633 Oid databaseId;
1634 Oid relationId;
1635
1636 relationId = RelationGetRelid(relation);
1637 if (relation->rd_rel->relisshared)
1638 databaseId = InvalidOid;
1639 else
1640 databaseId = MyDatabaseId;
1641
1642 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1643 databaseId, relationId);
1644}
1645
1646/*
1647 * CacheInvalidateRelcacheAll
1648 * Register invalidation of the whole relcache at the end of command.
1649 *
1650 * This is used by ALTER PUBLICATION, since changes in publications may
1651 * affect a large number of tables.
1652 */
1653void
1654CacheInvalidateRelcacheAll(void)
1655{
1656 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1657 InvalidOid, InvalidOid);
1658}
1659
1660/*
1661 * CacheInvalidateRelcacheByTuple
1662 * As above, but relation is identified by passing its pg_class tuple.
1663 */
1664void
1665CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1666{
1667 Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1668 Oid databaseId;
1669 Oid relationId;
1670
1671 relationId = classtup->oid;
1672 if (classtup->relisshared)
1673 databaseId = InvalidOid;
1674 else
1675 databaseId = MyDatabaseId;
1676 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1677 databaseId, relationId);
1678}
1679
1680/*
1681 * CacheInvalidateRelcacheByRelid
1682 * As above, but relation is identified by passing its OID.
1683 * This is the least efficient of the three options; use one of
1684 * the above routines if you have a Relation or pg_class tuple.
1685 */
1686void
1687CacheInvalidateRelcacheByRelid(Oid relid)
1688{
1689 HeapTuple tup;
1690
1691 tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1692 if (!HeapTupleIsValid(tup))
1693 elog(ERROR, "cache lookup failed for relation %u", relid);
1694 CacheInvalidateRelcacheByTuple(tup);
1695 ReleaseSysCache(tup);
1696}
1697
1698/*
1699 * CacheInvalidateRelSync
1700 * Register invalidation of the cache in logical decoding output plugin
1701 * for a database.
1702 *
1703 * This type of invalidation message is used for the specific purpose of output
1704 * plugins. Processes that do not decode WAL do nothing when they receive
1705 * the message.
1706 */
1707void
1708CacheInvalidateRelSync(Oid relid)
1709{
1710 RegisterRelsyncInvalidation(PrepareInvalidationState(),
1711 MyDatabaseId, relid);
1712}
1713
1714/*
1715 * CacheInvalidateRelSyncAll
1716 * Register invalidation of the whole cache in logical decoding output
1717 * plugin.
1718 */
1719void
1720CacheInvalidateRelSyncAll(void)
1721{
1722 CacheInvalidateRelSync(InvalidOid);
1723}
1724
1725/*
1726 * CacheInvalidateSmgr
1727 * Register invalidation of smgr references to a physical relation.
1728 *
1729 * Sending this type of invalidation msg forces other backends to close open
1730 * smgr entries for the rel. This should be done to flush dangling open-file
1731 * references when the physical rel is being dropped or truncated. Because
1732 * these are nontransactional (i.e., not-rollback-able) operations, we just
1733 * send the inval message immediately without any queuing.
1734 *
1735 * Note: in most cases there will have been a relcache flush issued against
1736 * the rel at the logical level. We need a separate smgr-level flush because
1737 * it is possible for backends to have open smgr entries for rels they don't
1738 * have a relcache entry for, e.g. because the only thing they ever did with
1739 * the rel is write out dirty shared buffers.
1740 *
1741 * Note: because these messages are nontransactional, they won't be captured
1742 * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1743 * should happen in low-level smgr.c routines, which are executed while
1744 * replaying WAL as well as when creating it.
1745 *
1746 * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1747 * three bytes of the ProcNumber using what would otherwise be padding space.
1748 * Thus, the maximum possible ProcNumber is 2^23-1.
1749 */
1750void
1751CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1752{
1753 SharedInvalidationMessage msg;
1754
1755 /* verify optimization stated above stays valid */
1756 StaticAssertStmt(MAX_BACKENDS_BITS <= 23,
1757 "MAX_BACKENDS_BITS is too big for inval.c");
1758
1759 msg.sm.id = SHAREDINVALSMGR_ID;
1760 msg.sm.backend_hi = rlocator.backend >> 16;
1761 msg.sm.backend_lo = rlocator.backend & 0xffff;
1762 msg.sm.rlocator = rlocator.locator;
1763 /* check AddCatcacheInvalidationMessage() for an explanation */
1764 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1765
1766 SendSharedInvalidMessages(&msg, 1);
1767}
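/*
 * Round-trip of the ProcNumber packing described above, mirroring this
 * function and LocalExecuteInvalidationMessage(); backend_hi is signed, so
 * negative values such as INVALID_PROC_NUMBER survive the split:
 *
 *		msg.sm.backend_hi = rlocator.backend >> 16;
 *		msg.sm.backend_lo = rlocator.backend & 0xffff;
 *		rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
 */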
1768
1769/*
1770 * CacheInvalidateRelmap
1771 * Register invalidation of the relation mapping for a database,
1772 * or for the shared catalogs if databaseId is zero.
1773 *
1774 * Sending this type of invalidation msg forces other backends to re-read
1775 * the indicated relation mapping file. It is also necessary to send a
1776 * relcache inval for the specific relations whose mapping has been altered,
1777 * else the relcache won't get updated with the new filenode data.
1778 *
1779 * Note: because these messages are nontransactional, they won't be captured
1780 * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1781 * should happen in low-level relmapper.c routines, which are executed while
1782 * replaying WAL as well as when creating it.
1783 */
1784void
1785CacheInvalidateRelmap(Oid databaseId)
1786{
1787 SharedInvalidationMessage msg;
1788
1789 msg.rm.id = SHAREDINVALRELMAP_ID;
1790 msg.rm.dbId = databaseId;
1791 /* check AddCatcacheInvalidationMessage() for an explanation */
1792 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1793
1794 SendSharedInvalidMessages(&msg, 1);
1795}
1796
1797
1798/*
1799 * CacheRegisterSyscacheCallback
1800 * Register the specified function to be called for all future
1801 * invalidation events in the specified cache. The cache ID and the
1802 * hash value of the tuple being invalidated will be passed to the
1803 * function.
1804 *
1805 * NOTE: Hash value zero will be passed if a cache reset request is received.
1806 * In this case the called routines should flush all cached state.
1807 * Yes, there's a possibility of a false match to zero, but it doesn't seem
1808 * worth troubling over, especially since most of the current callees just
1809 * flush all cached state anyway.
1810 */
1811void
1812CacheRegisterSyscacheCallback(int cacheid,
1813 SyscacheCallbackFunction func,
1814 Datum arg)
1815{
1816 if (cacheid < 0 || cacheid >= SysCacheSize)
1817 elog(FATAL, "invalid cache ID: %d", cacheid);
1818 if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1819 elog(FATAL, "out of syscache_callback_list slots");
1820
1821 if (syscache_callback_links[cacheid] == 0)
1822 {
1823 /* first callback for this cache */
1824 syscache_callback_links[cacheid] = syscache_callback_count + 1;
1825 }
1826 else
1827 {
1828 /* add to end of chain, so that older callbacks are called first */
1829 int i = syscache_callback_links[cacheid] - 1;
1830
1831 while (syscache_callback_list[i].link > 0)
1832 i = syscache_callback_list[i].link - 1;
1833 syscache_callback_list[i].link = syscache_callback_count + 1;
1834 }
1835
1836 syscache_callback_list[syscache_callback_count].id = cacheid;
1837 syscache_callback_list[syscache_callback_count].link = 0;
1838 syscache_callback_list[syscache_callback_count].function = func;
1839 syscache_callback_list[syscache_callback_count].arg = arg;
1840
1841 ++syscache_callback_count;
1842}
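/*
 * Usage sketch (hypothetical extension code; my_hook and my_reset_cache are
 * invented names). This simple callback flushes all cached state on any
 * event, which is also the required response to hashvalue == 0 (cache
 * reset):
 *
 *		static void
 *		my_hook(Datum arg, int cacheid, uint32 hashvalue)
 *		{
 *			my_reset_cache();
 *		}
 *
 *		...
 *		CacheRegisterSyscacheCallback(PROCOID, my_hook, (Datum) 0);
 */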
1843
1844/*
1845 * CacheRegisterRelcacheCallback
1846 * Register the specified function to be called for all future
1847 * relcache invalidation events. The OID of the relation being
1848 * invalidated will be passed to the function.
1849 *
1850 * NOTE: InvalidOid will be passed if a cache reset request is received.
1851 * In this case the called routines should flush all cached state.
1852 */
1853void
1854CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1855 Datum arg)
1856{
1857 if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1858 elog(FATAL, "out of relcache_callback_list slots");
1859
1860 relcache_callback_list[relcache_callback_count].function = func;
1861 relcache_callback_list[relcache_callback_count].arg = arg;
1862
1863 ++relcache_callback_count;
1864}
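/*
 * Usage sketch (hypothetical extension code): the callback receives the OID
 * of the invalidated relation, or InvalidOid for a cache reset:
 *
 *		static void
 *		my_relcache_hook(Datum arg, Oid relid)
 *		{
 *			if (OidIsValid(relid))
 *				my_cache_forget(relid);
 *			else
 *				my_cache_reset();
 *		}
 *
 *		CacheRegisterRelcacheCallback(my_relcache_hook, (Datum) 0);
 */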
1865
1866/*
1867 * CacheRegisterRelSyncCallback
1868 * Register the specified function to be called for all future
1869 * relsynccache invalidation events.
1870 *
1871 * This function is intended to be called from logical decoding output
1872 * plugins.
1873 */
1874void
1875CacheRegisterRelSyncCallback(RelSyncCallbackFunction func,
1876 Datum arg)
1877{
1878 if (relsync_callback_count >= MAX_RELSYNC_CALLBACKS)
1879 elog(FATAL, "out of relsync_callback_list slots");
1880
1881 relsync_callback_list[relsync_callback_count].function = func;
1882 relsync_callback_list[relsync_callback_count].arg = arg;
1883
1884 ++relsync_callback_count;
1885}
1886
1887/*
1888 * CallSyscacheCallbacks
1889 *
1890 * This is exported so that CatalogCacheFlushCatalog can call it, saving
1891 * this module from knowing which catcache IDs correspond to which catalogs.
1892 */
1893void
1894CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1895{
1896 int i;
1897
1898 if (cacheid < 0 || cacheid >= SysCacheSize)
1899 elog(ERROR, "invalid cache ID: %d", cacheid);
1900
1901 i = syscache_callback_links[cacheid] - 1;
1902 while (i >= 0)
1903 {
1904 struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1905
1906 Assert(ccitem->id == cacheid);
1907 ccitem->function(ccitem->arg, cacheid, hashvalue);
1908 i = ccitem->link - 1;
1909 }
1910}
1911
1912/*
1913 * CallRelSyncCallbacks
1914 */
1915void
1916CallRelSyncCallbacks(Oid relid)
1917{
1918 for (int i = 0; i < relsync_callback_count; i++)
1919 {
1920 struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
1921
1922 ccitem->function(ccitem->arg, relid);
1923 }
1924}
1925
1926/*
1927 * LogLogicalInvalidations
1928 *
1929 * Emit WAL for invalidations caused by the current command.
1930 *
1931 * This is currently only used for logging invalidations at the command end
1932 * or at commit time if any invalidations are pending.
1933 */
1934void
1935LogLogicalInvalidations(void)
1936{
1937 xl_xact_invals xlrec;
1938 InvalidationMsgsGroup *group;
1939 int nmsgs;
1940
1941 /* Quick exit if we haven't done anything with invalidation messages. */
1942 if (transInvalInfo == NULL)
1943 return;
1944
1945 group = &transInvalInfo->ii.CurrentCmdInvalidMsgs;
1946 nmsgs = NumMessagesInGroup(group);
1947
1948 if (nmsgs > 0)
1949 {
1950 /* prepare record */
1951 memset(&xlrec, 0, MinSizeOfXactInvals);
1952 xlrec.nmsgs = nmsgs;
1953
1954 /* perform insertion */
1955 XLogBeginInsert();
1956 XLogRegisterData(&xlrec, MinSizeOfXactInvals);
1957 ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1958 XLogRegisterData(msgs,
1959 n * sizeof(SharedInvalidationMessage)));
1960 ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1961 XLogRegisterData(msgs,
1962 n * sizeof(SharedInvalidationMessage)));
1963 XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1964 }
1965}
int xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs, bool *RelcacheInitFileInval)
Definition: inval.c:1012
#define ProcessMessageSubGroup(group, subgroup, codeFragment)
Definition: inval.c:384
void CacheInvalidateRelcache(Relation relation)
Definition: inval.c:1631
static InvalidationInfo * PrepareInvalidationState(void)
Definition: inval.c:682
static void AppendInvalidationMessages(InvalidationMsgsGroup *dest, InvalidationMsgsGroup *src)
Definition: inval.c:560
#define MAX_RELSYNC_CALLBACKS
Definition: inval.c:274
static void ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group, void(*func)(const SharedInvalidationMessage *msgs, int n))
Definition: inval.c:586
int inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs, bool *RelcacheInitFileInval)
Definition: inval.c:1088
void CacheInvalidateRelcacheByRelid(Oid relid)
Definition: inval.c:1687
void InvalidateSystemCaches(void)
Definition: inval.c:916
void AtEOXact_Inval(bool isCommit)
Definition: inval.c:1199
#define MAX_SYSCACHE_CALLBACKS
Definition: inval.c:272
void CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
Definition: inval.c:1751
#define SetGroupToFollow(targetgroup, priorgroup)
Definition: inval.c:198
void AtEOSubXact_Inval(bool isCommit)
Definition: inval.c:1310
static void AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group, Oid dbId, Oid relId)
Definition: inval.c:533
static int16 syscache_callback_links[SysCacheSize]
Definition: inval.c:284
static void AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup, const SharedInvalidationMessage *msg)
Definition: inval.c:320
void PreInplace_Inval(void)
Definition: inval.c:1250
struct InvalMessageArray InvalMessageArray
void CommandEndInvalidationMessages(void)
Definition: inval.c:1409
void AtInplace_Inval(void)
Definition: inval.c:1263
static void RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
Definition: inval.c:621
#define MAX_RELCACHE_CALLBACKS
Definition: inval.c:273
void CacheRegisterRelcacheCallback(RelcacheCallbackFunction func, Datum arg)
Definition: inval.c:1854
void CacheRegisterRelSyncCallback(RelSyncCallbackFunction func, Datum arg)
Definition: inval.c:1875
void ForgetInplace_Inval(void)
Definition: inval.c:1286
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup)
Definition: inval.c:191
struct InvalidationMsgsGroup InvalidationMsgsGroup
void CacheInvalidateRelSync(Oid relid)
Definition: inval.c:1708
int debug_discard_caches
Definition: inval.c:260
static InvalidationInfo * PrepareInplaceInvalidationState(void)
Definition: inval.c:752
void CacheInvalidateHeapTuple(Relation relation, HeapTuple tuple, HeapTuple newtuple)
Definition: inval.c:1571
static void CacheInvalidateHeapTupleCommon(Relation relation, HeapTuple tuple, HeapTuple newtuple, InvalidationInfo *(*prepare_callback)(void))
Definition: inval.c:1436
void CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
Definition: inval.c:1665
static InvalMessageArray InvalMessageArrays[2]
Definition: inval.c:181
static int syscache_callback_count
Definition: inval.c:286
static void RegisterRelsyncInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
Definition: inval.c:660
void ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs, int nmsgs, bool RelcacheInitFileInval, Oid dbid, Oid tsid)
Definition: inval.c:1135
void CacheInvalidateRelcacheAll(void)
Definition: inval.c:1654
#define RelCacheMsgs
Definition: inval.c:172
void CacheRegisterSyscacheCallback(int cacheid, SyscacheCallbackFunction func, Datum arg)
Definition: inval.c:1812
void(* SyscacheCallbackFunction)(Datum arg, int cacheid, uint32 hashvalue)
Definition: inval.h:23
void(* RelcacheCallbackFunction)(Datum arg, Oid relid)
Definition: inval.h:24
void(* RelSyncCallbackFunction)(Datum arg, Oid relid)
Definition: inval.h:25
int i
Definition: isn.c:77
void * MemoryContextAlloc(MemoryContext context, Size size)
Definition: mcxt.c:1260
void * MemoryContextAllocZero(MemoryContext context, Size size)
Definition: mcxt.c:1294
MemoryContext TopTransactionContext
Definition: mcxt.c:170
void * repalloc(void *pointer, Size size)
Definition: mcxt.c:2170
void pfree(void *pointer)
Definition: mcxt.c:2150
void * palloc0(Size size)
Definition: mcxt.c:1973
void * palloc(Size size)
Definition: mcxt.c:1943
MemoryContext CurTransactionContext
Definition: mcxt.c:171
#define VALGRIND_MAKE_MEM_DEFINED(addr, size)
Definition: memdebug.h:26
#define IsBootstrapProcessingMode()
Definition: miscadmin.h:477
FormData_pg_attribute * Form_pg_attribute
Definition: pg_attribute.h:202
void * arg
FormData_pg_class * Form_pg_class
Definition: pg_class.h:156
FormData_pg_constraint * Form_pg_constraint
FormData_pg_index * Form_pg_index
Definition: pg_index.h:70
uintptr_t Datum
Definition: postgres.h:69
static Datum ObjectIdGetDatum(Oid X)
Definition: postgres.h:257
#define InvalidOid
Definition: postgres_ext.h:35
unsigned int Oid
Definition: postgres_ext.h:30
#define MAX_BACKENDS_BITS
Definition: procnumber.h:38
#define RelationGetRelid(relation)
Definition: rel.h:516
void RelationCacheInvalidate(bool debug_discard)
Definition: relcache.c:2994
void RelationCacheInitFilePostInvalidate(void)
Definition: relcache.c:6882
void RelationCacheInitFilePreInvalidate(void)
Definition: relcache.c:6857
bool RelationIdIsInInitFile(Oid relationId)
Definition: relcache.c:6817
void RelationCacheInvalidateEntry(Oid relationId)
Definition: relcache.c:2938
static void AssertCouldGetRelation(void)
Definition: relcache.h:44
void RelationMapInvalidate(bool shared)
Definition: relmapper.c:468
char * GetDatabasePath(Oid dbOid, Oid spcOid)
Definition: relpath.c:110
void SendSharedInvalidMessages(const SharedInvalidationMessage *msgs, int n)
Definition: sinval.c:47
void ReceiveSharedInvalidMessages(void(*invalFunction)(SharedInvalidationMessage *msg), void(*resetFunction)(void))
Definition: sinval.c:69
#define SHAREDINVALCATALOG_ID
Definition: sinval.h:68
#define SHAREDINVALRELSYNC_ID
Definition: sinval.h:114
#define SHAREDINVALSMGR_ID
Definition: sinval.h:86
#define SHAREDINVALSNAPSHOT_ID
Definition: sinval.h:105
#define SHAREDINVALRELCACHE_ID
Definition: sinval.h:77
#define SHAREDINVALRELMAP_ID
Definition: sinval.h:97
void smgrreleaserellocator(RelFileLocatorBackend rlocator)
Definition: smgr.c:443
void InvalidateCatalogSnapshot(void)
Definition: snapmgr.c:443
SharedInvalidationMessage * msgs
Definition: inval.c:177
bool RelcacheInitFileInval
Definition: inval.c:236
InvalidationMsgsGroup CurrentCmdInvalidMsgs
Definition: inval.c:233
RelcacheCallbackFunction function
Definition: inval.c:290
RelSyncCallbackFunction function
Definition: inval.c:298
Datum arg
Definition: inval.c:299
RelFileLocator locator
Form_pg_class rd_rel
Definition: rel.h:111
SyscacheCallbackFunction function
Definition: inval.c:280
int16 link
Definition: inval.c:279
uint16 backend_lo
Definition: sinval.h:93
RelFileLocator rlocator
Definition: sinval.h:94
struct TransInvalidationInfo * parent
Definition: inval.c:249
struct InvalidationInfo ii
Definition: inval.c:243
InvalidationMsgsGroup PriorCmdInvalidMsgs
Definition: inval.c:246
int nmsgs
Definition: xact.h:304
void SysCacheInvalidate(int cacheId, uint32 hashValue)
Definition: syscache.c:698
void ReleaseSysCache(HeapTuple tuple)
Definition: syscache.c:269
HeapTuple SearchSysCache1(int cacheId, Datum key1)
Definition: syscache.c:221
bool RelationInvalidatesSnapshotsOnly(Oid relid)
Definition: syscache.c:722
SharedInvalCatcacheMsg cc
Definition: sinval.h:127
SharedInvalRelcacheMsg rc
Definition: sinval.h:129
SharedInvalCatalogMsg cat
Definition: sinval.h:128
SharedInvalRelSyncMsg rs
Definition: sinval.h:133
SharedInvalSmgrMsg sm
Definition: sinval.h:130
SharedInvalSnapshotMsg sn
Definition: sinval.h:132
SharedInvalRelmapMsg rm
Definition: sinval.h:131
int GetCurrentTransactionNestLevel(void)
Definition: xact.c:929
bool IsTransactionState(void)
Definition: xact.c:387
CommandId GetCurrentCommandId(bool used)
Definition: xact.c:829
#define MinSizeOfXactInvals
Definition: xact.h:307
#define XLOG_XACT_INVALIDATIONS
Definition: xact.h:175
#define XLogLogicalInfoActive()
Definition: xlog.h:126
XLogRecPtr XLogInsert(RmgrId rmid, uint8 info)
Definition: xloginsert.c:474
void XLogRegisterData(const void *data, uint32 len)
Definition: xloginsert.c:364
void XLogBeginInsert(void)
Definition: xloginsert.c:149