Fix more hash index bugs around marking buffers dirty.
author Robert Haas <rhaas@postgresql.org>
Fri, 16 Dec 2016 14:52:04 +0000 (09:52 -0500)
committer Robert Haas <rhaas@postgresql.org>
Fri, 16 Dec 2016 14:55:20 +0000 (09:55 -0500)
In _hash_freeovflpage(), if we're freeing the overflow page that
immediately follows the page to which tuples are being moved (the
confusingly-named "write buffer"), don't forget to mark that
page dirty after updating its hasho_nextblkno.
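
For context, here is a rough sketch of the chain-unlinking logic in
_hash_freeovflpage() with the fix in place.  Everything outside the hunk
shown below is an approximation reconstructed from this commit message;
identifiers such as writeblkno, prevblkno, nextblkno, and bstrategy are
assumed from the surrounding code of that era rather than quoted from it.

    /* Fix up the bucket chain around the overflow page being freed. */
    if (BlockNumberIsValid(prevblkno))
    {
        Buffer          prevbuf;
        HashPageOpaque  prevopaque;

        if (prevblkno == writeblkno)
            prevbuf = wbuf;     /* previous page is the write buffer itself */
        else
            prevbuf = _hash_getbuf_with_strategy(rel, prevblkno, HASH_WRITE,
                                                 LH_BUCKET_PAGE | LH_OVERFLOW_PAGE,
                                                 bstrategy);

        prevopaque = (HashPageOpaque) PageGetSpecialPointer(BufferGetPage(prevbuf));
        prevopaque->hasho_nextblkno = nextblkno;    /* unlink the freed page */

        if (prevblkno != writeblkno)
        {
            MarkBufferDirty(prevbuf);
            _hash_relbuf(rel, prevbuf);
        }
        else
        {
            /* the modified page is the write buffer; flag it to be dirtied */
            wbuf_dirty = true;
        }
    }

Without the new else branch, nothing recorded that the write buffer had been
modified in this path, so its updated hasho_nextblkno could be lost.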

In _hash_squeezebucket(), it's not necessary to mark the primary
bucket page dirty if there are no overflow pages, because there's
nothing to squeeze in that case.
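
The reason HASH_READ is the right argument here: _hash_chgbufaccess() (in
hashpage.c at the time) used its from_access parameter to decide whether the
caller had modified the page.  Roughly, as a paraphrased sketch rather than
code taken from this commit:

    void
    _hash_chgbufaccess(Relation rel, Buffer buf, int from_access, int to_access)
    {
        /* HASH_WRITE as from_access means "the caller changed this page" */
        if (from_access == HASH_WRITE)
            MarkBufferDirty(buf);
        if (from_access != HASH_NOLOCK)
            LockBuffer(buf, BUFFER_LOCK_UNLOCK);
        if (to_access != HASH_NOLOCK)
            LockBuffer(buf, to_access);
    }

When the bucket has no overflow chain, _hash_squeezebucket() returns without
touching the page, so passing HASH_READ releases the lock without needlessly
dirtying (and later writing out) an unmodified buffer.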

Amit Kapila, with help from Kuntal Ghosh and Dilip Kumar, after
an initial trouble report by Jeff Janes.

src/backend/access/hash/hashovfl.c

index 8fbf49461d1b21b131d99c1a45c8ed374b070828..5f1513bb43c33965096a901fbf91d2a323dbf000 100644
@@ -452,6 +452,11 @@ _hash_freeovflpage(Relation rel, Buffer ovflbuf, Buffer wbuf,
            MarkBufferDirty(prevbuf);
            _hash_relbuf(rel, prevbuf);
        }
+       else
+       {
+           /* prevbuf is the write buffer here; make sure it gets marked dirty */
+           wbuf_dirty = true;
+       }
    }
 
    /* write and unlock the write buffer */
@@ -643,7 +648,7 @@ _hash_squeezebucket(Relation rel,
     */
    if (!BlockNumberIsValid(wopaque->hasho_nextblkno))
    {
-       _hash_chgbufaccess(rel, wbuf, HASH_WRITE, HASH_NOLOCK);
+       _hash_chgbufaccess(rel, wbuf, HASH_READ, HASH_NOLOCK);
        return;
    }