Age | Commit message | Author |
|
Mostly, we need to check whether $ENV{PG_TEST_EXTRA} is set before
doing regular expression matches against it.
|
|
Do not rely on the OS recognizing a particular locale; find the right
locale by querying the "en_US" collation.
Author: Alexander Lakhin
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/ae73f6f5-8221-c112-4640-5cda812a69de@gmail.com
|
|
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing
use warnings;
by
use warnings FATAL => 'all';
in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
|
|
If the underlying table isn't being dumped, it's useless to dump
an extended statistics object; it'll just cause errors at restore.
We have always applied similar policies to, say, indexes.
(When and if we get cross-table stats objects, it might be profitable
to think a little harder about what to do with them. But for now
there seems no point in considering a stats object as anything but
an appendage of its table.)
Rian McGuire and Tom Lane, per report from Rian McGuire.
Back-patch to supported branches.
Discussion: https://postgr.es/m/7075d3aa-3f05-44a5-b68f-47dc6a8a0550@buildkite.com
|
|
Should match the name of the related GUC variable.
Discussion: https://www.postgresql.org/message-id/da4a680a-5d8a-4663-a5c8-a3ccbf23394a@eisentraut.org
|
|
This function was originally coded with a handmade expansion
of the array subscripts. We can do it a little faster and far
more legibly today, by using unnest() WITH ORDINALITY.
While at it, let's apply the rowcount estimation support that exists
for the underlying unnest() function: reduce the default ROWS estimate
to 100 and attach array_unnest_support. I'm not sure that
array_unnest_support can do anything useful today with the call sites
that exist in information_schema, but it can't hurt, and the existing
default rowcount of 1000 is surely much too high for any of these
cases.
The psql.sql regression script is using _pg_expandarray() as a
test case for \sf+. While we could keep doing so, the new one-line
function body makes a poor test case for \sf+ row-numbering, so
switch it to print another information_schema function.
Discussion: https://postgr.es/m/1424303.1703355485@sss.pgh.pa.us
|
|
add_file_to_manifest declared its mtime argument as pg_time_t,
apparently on the principle that copy-and-paste from the backend
is fine. However, the callers are passing struct stat's st_mtime
field which is plain time_t, and add_file_to_manifest itself is
passing the value to gmtime(3) which expects plain time_t,
so the whole thing would not work at all on any platform where
those types are different. Fortunately we can just switch this
variable to time_t.
Per warnings from assorted buildfarm members.
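As a minimal standalone sketch of the type issue (the helper below is
hypothetical, not the real add_file_to_manifest): struct stat's st_mtime
and gmtime(3) both traffic in plain time_t, so that is the type the
manifest code should pass around.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    /* hypothetical stand-in, for illustration only */
    static void
    report_mtime(const char *path, time_t mtime)    /* time_t, not pg_time_t */
    {
        struct tm  *tm = gmtime(&mtime);            /* gmtime(3) takes time_t */
        char        buf[64];

        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S GMT", tm);
        printf("%s last modified %s\n", path, buf);
    }

    int
    main(void)
    {
        struct stat st;

        if (stat("/etc/hosts", &st) == 0)
            report_mtime("/etc/hosts", st.st_mtime);    /* st_mtime is time_t */
        return 0;
    }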
|
|
|
|
The code was passing a scalar argument to node->restart(), which expects
a hash, causing a warning from Perl ("Odd number of
elements in hash assignment").
But the node->restart() function doesn't take a mode argument anyway.
This was probably copied from an incorrect comment (see commit
750c59d7ec). The default restart mode is already "fast", so the test
should still be semantically correct without explicitly specifying the
mode.
Discussion: https://www.postgresql.org/message-id/e3f4bf1b-63d3-408a-b07e-d35a0fdf1b98@eisentraut.org
|
|
Recently-introduced code in reconstruct.c was using "unsigned"
to store the result of read(), pg_pread(), or write(). This is
completely bogus: it breaks subsequent tests for the result being
negative, as we're being reminded of by a chorus of buildfarm
warnings. Switch to "int" as was doubtless intended. (There are
several other uses of "unsigned" in this file that also look poorly
chosen to me, but for now I'm just trying to clean up the buildfarm.)
A larger problem is that "int" is not necessarily wide enough to hold
the result: per POSIX, all these functions return ssize_t. In places
where the requested read or write length clearly fits in int, that's
academic. It may be academic anyway as long as we constrain
individual data files to 1GB, since even a readv or writev-like
operation would then not be responsible for transferring more than
1GB. Nonetheless it seems like trouble waiting to happen, so I made
a pass over readv and writev calls and fixed the result variables
where that seemed appropriate. We might want to think about changing
some of the fd.c functions to return ssize_t too, for future-proofing;
but I didn't tackle that here.
Discussion: https://postgr.es/m/1672202.1703441340@sss.pgh.pa.us
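An illustrative standalone program (not code from reconstruct.c) showing why
an unsigned result variable defeats the error check; the "comparison is
always false" diagnostic it provokes is the same kind of warning the
buildfarm raised.

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        buf[16];
        unsigned    rb_bad;
        ssize_t     rb;                         /* POSIX return type of read() */

        rb_bad = read(-1, buf, sizeof(buf));    /* fails and returns -1 ... */
        if (rb_bad < 0)                         /* ... but this can never be true */
            printf("unreachable\n");

        rb = read(-1, buf, sizeof(buf));
        if (rb < 0)
            printf("read failed, detected correctly\n");
        return 0;
    }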
|
|
I don't think there's a real problem here, because if we reach
the loop over 'tles' then we will either find at least one
TimeLineHistoryEntry such that oldest_segno != 0, in which case
unsummarized_lsn will be initialized, or else unsummarized_tli
will remain 0 and an error will occur before unsummarized_lsn
is used for anything. But some compilers are complaining, as
reported on list by Nathan Bossart and off-list by Andrew Dunstan.
Discussion: http://postgr.es/m/20231223215147.GA69623@nathanxps13
|
|
e0b1ee17dc introduced an optimization for matching B-tree scan keys required
for the directional scan. However, it incorrectly assumed that all keys
required for the opposite-direction scan are satisfied by _bt_first(). It has
been shown that with multiple scan keys over the same column, a lesser one
(according to the scan direction) could win, leaving the other one unsatisfied.
Instead of relying on _bt_first(), this commit introduces code that remembers
whether there was at least one match on the page. If so, we know that the keys
required for the opposite-direction scan are satisfied as long as the
corresponding values are not NULL.
Also, this commit simplifies the description of the optimization for keys
required for the current-direction scan. The flag used for it is now named
continuescanPrechecked and means exactly that the *continuescan flag is known
to be true for the last item on the page.
Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzn0LeLcb1PdBnK0xisz8NpHkxRrMr3NWJ%2BKOK-WZ%2BQtTQ%40mail.gmail.com
Reviewed-by: Pavel Borisov
|
|
It's not necessary to keep the firstPage flag as a field of BTScanOpaqueData.
This commit makes it an argument of the _bt_readpage() function. We can easily
distinguish first-time and repeated calls (within the scan) of this function.
Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzk4SOsw%2BtHuTFiz8U9Jqj-R77rYPkhWKODCBb1mdHACXA%40mail.gmail.com
Reviewed-by: Pavel Borisov
|
|
Follow-up to dc2123400
Kyotaro Horiguchi
Discussion: https://postgr.es/m/20231222.154939.1509525390095583358.horikyota.ntt@gmail.com
Discussion: https://postgr.es/m/20231225.145124.1745560266993421173.horikyota.ntt@gmail.com
|
|
There are many situations in which we share the same pointer to a Bitmapset
structure across different places. In order to avoid undesirable side effects,
the replace_relid() function should always return a copy.
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com
Reviewed-by: Richard Guo, Andres Freund, Ashutosh Bapat, Andrei Lepikhov
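A hedged, backend-style sketch of the rule (hypothetical function name, not
the actual planner code): copy the possibly-shared input set and modify only
the copy.

    #include "postgres.h"
    #include "nodes/bitmapset.h"

    /* illustration only: replace oldId with newId, never touching the input */
    static Bitmapset *
    replace_relid_copying(const Bitmapset *relids, int oldId, int newId)
    {
        Bitmapset  *result = bms_copy(relids);  /* the input may be shared */

        if (bms_is_member(oldId, result))
        {
            result = bms_del_member(result, oldId);
            result = bms_add_member(result, newId);
        }
        return result;
    }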
|
|
This option forces each bitmapset modification to reallocate the bitmapset.
This is useful for debugging dangling pointers to bitmapsets.
Discussion: https://postgr.es/m/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com
Reviewed-by: Richard Guo, Andres Freund, Ashutosh Bapat, Andrei Lepikhov
|
|
New asserts validate that arguments are really bitmapsets. This should help
detect accesses via dangling pointers earlier.
Discussion: https://postgr.es/m/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com
Reviewed-by: Richard Guo, Andres Freund, Ashutosh Bapat, Andrei Lepikhov
|
|
set_config_option() bails out early if it detects that the option to
be set is PGC_BACKEND or PGC_SU_BACKEND class and we're reading the
config file in a postmaster child; we don't want to apply any new
value in such a case. That's fine as far as it goes, but it fails
to consider the requirements of the pg_file_settings view: for that,
we need to check validity of the value even though we have no
intention to apply it. Because we didn't, even very silly values
for affected GUCs would be reported as valid by the view. There
are only half a dozen such GUCs, which perhaps explains why this
got overlooked for so long.
Fix by continuing when changeVal is false; this parallels the logic
in some other early-exit paths.
Also, the check added by commit 924bcf4f1 to prevent GUC changes in
parallel workers seems a few bricks shy of a load: it's evidently
assuming that ereport(elevel, ...) won't return. Make sure we
bail out if it does. The lack of trouble reports suggests that
this is only a latent bug, i.e. parallel workers don't actually
reach here with elevel < ERROR. (Per the code coverage report,
we never reach here at all in the regression suite.) But we clearly
don't want to risk proceeding if that does happen.
Per report from Rıdvan Korkmaz. These are ancient bugs, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/2089235.1703617353@sss.pgh.pa.us
|
|
Brown-paper-bag bug in commit 58c3151bb. Per buildfarm.
|
|
Like commit 388e80132, use "#pragma GCC system_header" to silence
warnings appearing within the Python headers, since newer Python
versions no longer worry about some restrictions we still use like
-Wdeclaration-after-statement.
This patch improves on 388e80132 by inventing a separate wrapper
header file, allowing the pragma to be tightly scoped to just
the Python headers and not other stuff we have laying about in
plpython.h. I applied the same technique to plperl for the same
reason: the original patch suppressed warnings for a good deal
of our own code, not only the Perl headers.
Like the previous commit, back-patch to supported branches.
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/ae523163-6d2a-4b81-a875-832e48dec502@eisentraut.org
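A sketch of the wrapper-header technique (the file name and exact contents of
the real headers may differ): the pragma lives in a tiny header of its own, so
only the third-party includes are treated as system headers.

    /* plpython_system.h-style wrapper, illustrative only */
    #ifdef __GNUC__
    #pragma GCC system_header
    #endif

    #include <Python.h>

The language-specific plpython.h and plperl.h headers can then include such a
wrapper instead of the interpreter headers directly, so warnings in our own
code remain visible.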
|
|
|
|
This will eventually be replaced once some translations exist
for pg_combinebackup's messages. In the meantime, we need
something here to prevent "make" from complaining in
builds with --enable-nls. Per advice added in 88dad06b4.
|
|
Its usage was removed in 6c5576075b, but the definition was not.
|
|
Two translatable strings introduced in dc212340058b were split into two
parts, making their translation harder than necessary.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20231225.134747.2287499067164862136.horikyota.ntt@gmail.com
|
|
In the same spirit as 5e0c761d0 and some earlier commits,
suppress a chorus of buildfarm warnings about braces in
these initializers.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs48GzM-Ff7vr=_CeqaXxFBB9UntqtaW1cjU8hOo62AbOOg@mail.gmail.com
|
|
Discussion: https://postgr.es/m/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
|
|
Self-join removal appears to be safe to apply with placeholder variables
as long as we handle PlaceHolderVar in replace_varno_walker() and replace
the relid in phinfo->ph_lateral.
Discussion: https://postgr.es/m/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
|
|
This commit also retires sje_walker. This increases the generality of
replacing a varno in the parse tree and simplifies the code.
Discussion: https://postgr.es/m/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
|
|
This commit introduces enhancements to the pg_stat_checkpointer view by adding
three new columns: restartpoints_timed, restartpoints_req, and
restartpoints_done. These additions aim to improve the visibility and
monitoring of restartpoint processes on replicas.
Previously, it was challenging to differentiate between successful and failed
restartpoint requests. This limitation arises because restartpoints on replicas
are dependent on checkpoint records from the primary, and cannot occur more
frequently than these checkpoints.
The new columns allow for clear distinction and tracking of restartpoint
requests, their triggers, and successful completions. This enhancement aids
database administrators and developers in better understanding and diagnosing
issues related to restartpoint behavior, particularly in scenarios where
restartpoint requests may fail.
System catalog is changed. Catversion is bumped.
Discussion: https://postgr.es/m/99b2ccd1-a77a-962a-0837-191cdf56c2b9%40inbox.ru
Author: Anton A. Melnikov
Reviewed-by: Kyotaro Horiguchi, Alexander Korotkov
|
|
Using a scale factor large enough that the number of rows to insert
exceeds INT32_MAX would cause an infinite loop in
initPopulateTable(), preventing pgbench from finishing its initialization.
Oversight in e35cc3b3f2d0, which refactored the data generation logic.
Author: John Hsu
Reviewed-by: Tatsuo Ishii, Japin Li
Discussion: https://postgr.es/m/CA+-JvFvHsOafjHcuFPfkyouHNZvbOXhBNhwZxKm3WNgYz9bwzA@mail.gmail.com
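A standalone C illustration of the failure mode (not the pgbench code itself):
a 32-bit counter can never reach a row target above INT32_MAX, so the
generation loop cannot terminate; the counter must be as wide as the target.

    #include <stdint.h>
    #include <stdio.h>

    static void
    populate(int64_t total_rows)
    {
        /*
         * Broken variant: "for (int32_t i = 0; i < total_rows; i++)" -- i tops
         * out at INT32_MAX and can never exceed total_rows, so the loop would
         * spin forever once total_rows is larger than that.
         */
        for (int64_t i = 0; i < total_rows; i++)
        {
            /* generate and insert row i here */
        }
        printf("inserted %lld rows\n", (long long) total_rows);
    }

    int
    main(void)
    {
        /* pgbench_accounts holds 100,000 rows per scale unit */
        printf("rows at scale 25000: %lld (exceeds INT32_MAX)\n",
               (long long) 25000 * 100000);
        populate(1000);             /* small demo run */
        return 0;
    }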
|
|
Commit 664d75753 pulled 010_tab_completion.pl's infrastructure for
invoking an interactive psql session out into a generally-useful test
function, but it didn't move enough stuff. We need to set up various
environment variables that readline will look at, both to ensure
stability of test results and to prevent test actions from cluttering
the calling user's ~/.psql_history. Expecting calling scripts to
remember to do that is too failure-prone: the other existing caller
001_password.pl did not do it. Hence, remove those initialization
steps from 010_tab_completion.pl and put them into interactive_psql().
Since interactive_psql was already making a local ENV hash, this has
no effect on calling scripts.
Discussion: https://postgr.es/m/794610.1703182896@sss.pgh.pa.us
|
|
Previously, when a column was dropped, the attacl, attoptions, and
attfdwoptions fields were kept unchanged. This is probably harmless, but it
seems wasteful, and leaves potentially dangling data lying around (for
example, attacl could contain references to users that are later also
dropped).
Change this to set those fields to null when a column is marked as
dropped.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/249d819d-1763-4580-8110-0bf91a0f08b7@eisentraut.org
|
|
Up to now, our distribution tarballs have included a plain-text form
of the installation.sgml chapter. The rationale for that was that a
recipient might not have either ready internet access or HTML-viewing
tools; a theory that seems downright quaint today. Maintaining the
ability to generate this file is not without cost, because it puts
special requirements on installation.sgml that are often overlooked.
Moreover, we are moving in the direction of making our distribution
tarballs be pure git snapshots for traceability/reproducibility
reasons; including generated files doesn't fit into that plan.
Hence, let's just drop INSTALL and remove the infrastructure for
generating it. The top-level README will now recommend visiting
our website to see the installation instructions. As a useful
side-effect, we can get rid of README.git which has provoked
confusion.
Discussion: https://postgr.es/m/20231220114927.faccqqprmuyrzdip@alap3.anarazel.de
Discussion: https://postgr.es/m/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
|
|
Commit 1301c80b21 removed some infrastructure needed to check
windows-oriented perl scripts. It also removed most such scripts, but
this one was left over. We repair the damage by making Win32::Registry a
conditional requirement that is only loaded on Windows. With this change
`perl -cw win32tzlist.pl` once again passes on non-Windows machines.
Discussion: https://postgr.es/m/a2bd77fd-61b8-4c2b-b12e-3e22ae260f82@eisentraut.org
|
|
Commit db6d9891e8 changed them to be initialized with --no-locale, but
that reduced the test coverage for non-C locales.
Discussion: https://postgr.es/m/0d47e5ecc037b3908149aad5f2a987793cf938bd.camel%40j-davis.com
|
|
Per report from Alexander Lakhin.
Discussion: http://postgr.es/m/061cccf7-0cac-804f-4c2a-9d6da8e3848b@gmail.com
|
|
Apparently, spell check would have been a really good idea.
Alexander Lakhin, with a few additions as per an off-list report
from Andres Freund.
Discussion: http://postgr.es/m/f08f7c60-1ad3-0b57-d580-54b11f07cddf@gmail.com
|
|
Another oversight in dc2123400, visible when building/testing
in the source directory. (There's a lot of stuff we could
simplify if we stop supporting that case, but for now it's
still mainstream.)
|
|
This is necessary when spgcanreturn() is invoked on a partitioned
index, and the failure might be reachable in other scenarios as
well. The rest of what spgGetCache() does is perfectly sensible
for a partitioned index, so we should allow it to go through.
I think the main takeaway from this is that we lack sufficient test
coverage for non-btree partitioned indexes. Therefore, I added
simple test cases for brin and gin as well as spgist (hash and
gist AMs were covered already in indexing.sql).
Per bug #18256 from Alexander Lakhin. Although the known test case
only fails since v16 (3c569049b), I've got no faith at all that there
aren't other ways to reach this problem; so back-patch to all
supported branches.
Discussion: https://postgr.es/m/18256-0b0e1b6e4a620f1b@postgresql.org
|
|
Oversight in commit dc2123400, evidently.
|
|
Fix a bug during MERGE if a cross-partition update is attempted on a
partitioned table with a BEFORE DELETE ROW trigger that returns NULL,
to prevent the update. This would cause an error to be thrown, or an
assert failure in an assert-enabled build.
This was an oversight in 9321c79c86, which failed to properly
distinguish a DELETE prevented by a trigger from one prevented by a
concurrent update. Fix by having ExecDelete() return the TM_Result
status to ExecCrossPartitionUpdate(), so that it can distinguish the
two cases, and make ExecCrossPartitionUpdate() return the TM_Result
status to ExecUpdateAct(), so that it can return the correct status
from a concurrent update.
In addition, ensure that the command tag is correctly updated by
having ExecMergeMatched() pass canSetTag to ExecUpdateAct(), rather
than passing false, so that it updates the command tag if it does a
cross-partition update, making this code path in ExecMergeMatched()
consistent with ExecUpdate().
Per bug #18238 from Alexander Lakhin. Back-patch to v15, where MERGE
was introduced.
Dean Rasheed, reviewed by Richard Guo and Jian He.
Discussion: https://postgr.es/m/18238-2f2bdc7f720180b9%40postgresql.org
|
|
The callers of this function assume that the first Oid in the returned
list corresponds to the immediate parent and the last one corresponds to
the topmost parent. Make that explicit in the
function prologue.
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAExHW5vCbATEmht861=G-BFPHNwLUqyeGa_=8-xibJ6Q1UxAeA@mail.gmail.com
|
|
They are only used for some tests. The tests are already set to skip
as appropriate if they are not available.
Discussion: https://www.postgresql.org/message-id/flat/ZQzp_VMJcerM1Cs_%40paquier.xyz
|
|
This is a function that makes a node jump by N WAL segments, which is
something a couple of tests have been relying on for some cases related
to streaming, replication slot limits and logical decoding on standbys.
Hence, this centralizes the logic, while making it cheaper by relying on
pg_logical_emit_message() to emit WAL records before switching to a new
segment.
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Euler Taveira
Discussion: https://postgr.es/m/CALj2ACU3R8QFCvDewHCMKjgb2w_-CMCyd6DAK=Jb-af14da5eg@mail.gmail.com
|
|
Commit 6af179395 added an isCatalogRel field to some WAL record types,
but this field was not shown in the rmgr descriptions. This commit
changes several rmgr descriptions to display the isCatalogRel
field.
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/957dc8f9-2a02-4640-9c01-9dcbf97c4187%40gmail.com
|
|
--show-diff becomes --diff, and --silent-diff becomes --check. These
options may now be given together. Without --check, --diff will exit
with a zero status even if diffs are found. With --check, it will now
exit with a non-zero status in that case.
Author: Tristan Partin
Reviewed-by: Daniel Gustafsson, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CXLX2XYTH9S6.140SC6Y61VD88@neon.tech
|
|
The pg_dump compression was using strdup() instead of pg_strdup()
and failed to check the returned pointer for out-of-memory before
dereferencing it. Fix by using pg_strdup() instead, which was probably
the intention in the original patch.
Backpatch to v16 where pg_dump compression was introduced.
Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CC661D60-6F4C-474D-B9CF-E789ACA3CEFC@yesql.se
Backpatch-through: 16
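A minimal standalone sketch of the difference (xstrdup below is a hypothetical
stand-in for pg_strdup): the wrapper reports out-of-memory and exits, so
callers never dereference a NULL result, whereas a bare strdup() can hand one
back.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* hypothetical stand-in for pg_strdup() */
    static char *
    xstrdup(const char *in)
    {
        char   *out = strdup(in);

        if (out == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(1);
        }
        return out;
    }

    int
    main(void)
    {
        /* e.g. copying a compression specification given on the command line */
        char   *spec = xstrdup("gzip:level=6");

        printf("compression spec: %s\n", spec);
        free(spec);
        return 0;
    }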
|
|
To take an incremental backup, you use the new replication command
UPLOAD_MANIFEST to upload the manifest for the prior backup. This
prior backup could either be a full backup or another incremental
backup. You then use BASE_BACKUP with the INCREMENTAL option to take
the backup. pg_basebackup now has an --incremental=PATH_TO_MANIFEST
option to trigger this behavior.
An incremental backup is like a regular full backup except that
some relation files are replaced with files with names like
INCREMENTAL.${ORIGINAL_NAME}, and the backup_label file contains
additional lines identifying it as an incremental backup. The new
pg_combinebackup tool can be used to reconstruct a data directory
from a full backup and a series of incremental backups.
Patch by me. Reviewed by Matthias van de Meent, Dilip Kumar, Jakub
Wartak, Peter Eisentraut, and Álvaro Herrera. Thanks especially to
Jakub for incredibly helpful and extensive testing.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
|
|
When active, this process writes WAL summary files to
$PGDATA/pg_wal/summaries. Each summary file contains information for a
certain range of LSNs on a certain TLI. For each relation, it stores a
"limit block" which is 0 if a relation is created or destroyed within
a certain range of WAL records, or otherwise the shortest length to
which the relation was truncated during that range of WAL records, or
otherwise InvalidBlockNumber. In addition, it stores a list of blocks
which have been modified during that range of WAL records, but
excluding blocks which were removed by truncation after they were
modified and never subsequently modified again.
In other words, it tells us which blocks need to be copied in case of an
incremental backup covering that range of WAL records. But this
doesn't yet add the capability to actually perform an incremental
backup; the next patch will do that.
A new parameter summarize_wal enables or disables this new background
process. The background process also automatically deletes summary
files that are older than wal_summarize_keep_time, if that parameter
has a non-zero value and the summarizer is configured to run.
Patch by me, with some design help from Dilip Kumar and Andres Freund.
Reviewed by Matthias van de Meent, Dilip Kumar, Jakub Wartak, Peter
Eisentraut, and Álvaro Herrera.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
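A hypothetical, much-simplified model of the per-relation summary described
above (not the actual walsummarizer data structures), just to make the
"limit block" convention concrete.

    #include <stdint.h>
    #include <stdio.h>

    #define InvalidBlockNumber  ((uint32_t) 0xFFFFFFFF)

    typedef struct RelSummary
    {
        uint32_t    limit_block;        /* 0 if the relation was created or
                                         * destroyed in the WAL range; else the
                                         * shortest truncation length seen in
                                         * the range; else InvalidBlockNumber */
        uint32_t   *modified_blocks;    /* blocks modified in the range, minus
                                         * those truncated away afterwards */
        int         nmodified;
    } RelSummary;

    int
    main(void)
    {
        /* relation seen in the range, never truncated, no blocks recorded yet */
        RelSummary  s = {InvalidBlockNumber, NULL, 0};

        printf("limit_block = %u, modified = %d\n", s.limit_block, s.nmodified);
        return 0;
    }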
|
|
First, mark the xlblocks member with InvalidXLogRecPtr, then issue a
write barrier, then initialize it. That ensures that the xlblocks
member doesn't appear valid while the contents are being initialized.
In preparation for reading WAL buffer contents without a lock.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACVfFMfqD5oLzZSQQZWfXiJqd-NdX0_317veP6FuB31QWA@mail.gmail.com
Reviewed-by: Andres Freund
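A standalone C11 analog of that ordering (not the xlog.c code; the seq_cst
fence stands in for pg_write_barrier()): publish the invalid marker, fence,
rewrite the contents, then publish the real value, so a lock-free reader can
never pair a valid-looking marker with half-initialized contents.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define INVALID_END_PTR  UINT64_MAX

    static _Atomic uint64_t page_end_ptr;   /* plays the role of one xlblocks slot */
    static char             page_buf[8192];

    static void
    init_page(uint64_t new_end_ptr)
    {
        /* 1. mark the slot invalid */
        atomic_store_explicit(&page_end_ptr, INVALID_END_PTR, memory_order_relaxed);
        /* 2. barrier: the invalid marker must be visible before step 3's writes */
        atomic_thread_fence(memory_order_seq_cst);
        /* 3. only now reinitialize the page contents */
        memset(page_buf, 0, sizeof(page_buf));
        /* 4. publish the real end pointer */
        atomic_store_explicit(&page_end_ptr, new_end_ptr, memory_order_release);
    }

    int
    main(void)
    {
        init_page(8192);
        printf("end ptr: %llu\n", (unsigned long long) atomic_load(&page_end_ptr));
        return 0;
    }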
|