author     Tom Lane    2012-05-22 23:42:05 +0000
committer  Tom Lane    2012-05-22 23:42:05 +0000
commit     ed962fd712bbc0836437c8f789d9152aca5711b5 (patch)
tree       0c236031cce28931812ef19d7a61c2b29cc37a76
parent     92a953fbf8c90c3b316fbc275767efb6994f1589 (diff)
Ensure that seqscans check for interrupts at least once per page.
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query would be effectively uncancelable for long stretches. Add a check placed to occur once per page, which should be enough to provide reasonable response time without adding any measurable overhead.

Report and patch by Merlin Moncure (though I tweaked it a bit). Back-patch to all supported branches.
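The pattern the patch applies is general: any page-at-a-time loop that can spin for a long time without returning control to higher code levels should poll for interrupts itself. A minimal sketch of that pattern follows, assuming only the standard CHECK_FOR_INTERRUPTS() macro from miscadmin.h; the scan_all_pages function and its argument are illustrative placeholders, not part of this patch.

#include "postgres.h"
#include "miscadmin.h"			/* CHECK_FOR_INTERRUPTS() */
#include "storage/block.h"		/* BlockNumber */

/* Illustrative only: walk every page, polling for cancel once per page. */
static void
scan_all_pages(BlockNumber nblocks)
{
	BlockNumber	blkno;

	for (blkno = 0; blkno < nblocks; blkno++)
	{
		/*
		 * One check per page: negligible cost, but it bounds how long a
		 * cancel or terminate request can go unnoticed even if every
		 * tuple on the page turns out to be dead.
		 */
		CHECK_FOR_INTERRUPTS();

		/* ... read the page and examine its tuples ... */
	}
}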
-rw-r--r--   src/backend/access/heap/heapam.c   7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0d6fe3f0ac..0c67156390 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -222,6 +222,13 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 		scan->rs_cbuf = InvalidBuffer;
 	}
 
+	/*
+	 * Be sure to check for interrupts at least once per page.  Checks at
+	 * higher code levels won't be able to stop a seqscan that encounters
+	 * many pages' worth of consecutive dead tuples.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
 	/* read page using selected strategy */
 	scan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, page,
 									   RBM_NORMAL, scan->rs_strategy);