|
Run program_XXX tests instead of its own tests. This ensures
consistency with the test suites of other programs and enforces common
policies, such as help line length.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/OSCPR01MB14966247015B7E3D8D340D022F56FA@OSCPR01MB14966.jpnprd01.prod.outlook.com
|
|
Delete an intermediate variable, a redundant cast, a use of long and a
use of long long. scanf() the seed directly into a uint64, now that we
can do that with SCNu64 from <inttypes.h>.
The previous coding was from pre-C99 times when %lld might not have been
there, so it read into an unsigned long. Therefore behavior varied
by OS, and --random-seed would accept either 32-bit or 64-bit seeds. Now
it's the same everywhere.
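For illustration, a minimal sketch of the new style (the variable and the
input value below are made up, not taken from the patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t    seed;
        const char *arg = "12345678901234567890"; /* e.g. a --random-seed value */

        /* scanf() the seed directly into a uint64_t via the C99 SCNu64 macro */
        if (sscanf(arg, "%" SCNu64, &seed) == 1)
            printf("seed = %" PRIu64 "\n", seed);
        return 0;
    }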
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/b936d2fb-590d-49c3-a615-92c3a88c6c19%40eisentraut.org
|
|
We have collected several instances of a workaround for GCC bug 53119,
which caused false-positive compiler warnings. This bug has long been
fixed, but was still seen on the buildfarm, most recently on lapwing
with gcc (Debian 4.7.2-5). (The GCC bug tracker mentions that a fix
was backported to 4.7.4 and 4.8.3.)
Since commit 6fdd5d95634, that compiler no longer produces a warning-free
build anyway, so we don't need to keep these workarounds. Furthermore,
the consensus appears to be that we don't want to keep supporting
platforms of that era at all.
This reverts the following commits:
d937904cce6a3d82e4f9c2127de7b59105a134b3
506428d091760650971433f6bc083531c307b368
b449afb582bb9015bfbb85abc10ce122aef9ec70
6392f2a0968c20ecde4d27b6652703ad931fce92
bad0763a4d7be3005eae35d460c73ac4bc7ebaad
5e0c761d0a13c7b4f7c5de618ac38560d74d74d0
and makes a few similar fixes to newer code.
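For reference, the removed workaround pattern looked roughly like this (an
illustrative sketch, not a literal excerpt from the reverted commits):

    typedef struct { int a; int b; } Pair;

    void
    example(void)
    {
        /*
         * GCC bug 53119 made "= {0}" draw spurious -Wmissing-braces warnings
         * on affected 4.7.x compilers, so extra braces (or memset) were used
         * to silence them.
         */
        Pair    workaround[2] = {{0, 0}};   /* workaround style */
        Pair    plain[2] = {0};             /* plain C99 zero-initialization */

        (void) workaround;
        (void) plain;
    }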
Discussion: https://www.postgresql.org/message-id/flat/e170d61f-01ab-4cf9-ab68-91cd1fac62c5%40eisentraut.org
Discussion: https://www.postgresql.org/message-id/flat/CA%2BTgmoYEAm-KKZibAP3hSqbTFTjUd47XtVcf3xSFDpyecXX9uQ%40mail.gmail.com
|
|
This commit reshapes the grammar of some commands to apply a more
consistent style across the board, following rules similar to
ce1b0f9da03e:
- Elimination of some pointless used-once variables.
- Use of long options, to self-document better the options used.
- Use of fat commas to link option names and their assigned values,
including redirections, so that perltidy can be tricked into putting them
together.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87jz8rzf3h.fsf@wibble.ilmari.org
|
|
We want to support a "noreturn" decoration on more compilers besides
just GCC-compatible ones, but for that we need to move the decoration
in front of the function declaration instead of either behind it or
wherever, which is the current style afforded by GCC-style attributes.
Also rename the macro to "pg_noreturn" to be similar to the C11
standard "noreturn".
pg_noreturn is now supported on all compilers that support C11 (using
_Noreturn), as well as GCC-compatible ones (using __attribute__, as
before), as well as MSVC (using __declspec). (When PostgreSQL
requires C11, the latter two variants can be dropped.)
Now, all supported compilers effectively support pg_noreturn, so the
extra code for !HAVE_PG_ATTRIBUTE_NORETURN can be dropped.
This also fixes a possible problem if third-party code includes
stdnoreturn.h, because then the current definition of
#define pg_attribute_noreturn() __attribute__((noreturn))
would cause an error.
Note that the C standard does not support a noreturn attribute on
function pointer types. So we have to drop these here. There are
only two instances at this time, so it's not a big loss. In one case,
we can make up for it by adding pg_noreturn to a wrapper function
and adding a pg_unreachable(); in the other case, the latter was
already done before.
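A simplified sketch of the resulting definition (the real header has more
conditions and comments than shown here):

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
    #define pg_noreturn _Noreturn
    #elif defined(__GNUC__) || defined(__clang__)
    #define pg_noreturn __attribute__((noreturn))
    #elif defined(_MSC_VER)
    #define pg_noreturn __declspec(noreturn)
    #endif

    /* The decoration now goes in front of the declaration: */
    pg_noreturn extern void proc_exit(int code);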
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/pxr5b3z7jmkpenssra5zroxi7qzzp6eswuggokw64axmdixpnk@zbwxuq7gbbcw
|
|
ParseScript only needs the lineno for meta-commands, so let's not
bother computing it otherwise. While this doesn't save much given
the previous patch, there's no point in doing unnecessary work.
While we're at it, avoid calling psql_scan_get_location() twice for
a meta-command.
One reason for making this change is that the line number computed
in ParseScript's main loop was actually wrong in most cases: it
would point just past the semicolon of the previous SQL command,
not at what the user thinks the current command's line number is.
We could add some code to skip whitespace before capturing the line
number, but it would be pretty pointless at present. Just move the
call to avoid the temptation to rely on that value. (Once we've
lexed the backslash, the computed line number will be right.)
This change also means that pgbench never inquires about the
location before it's lexed something, so that the care taken in
the previous patch to behave sanely in that case is unnecessary.
It seems best to keep that logic, though, as future callers
might depend on it.
Author: Daniel Vérité <daniel@manitou-mail.org>
Discussion: https://postgr.es/m/84a8a89e-adb8-47a9-9d34-c13f7150ee45@manitou-mail.org
|
|
pgbench wants to record the starting line number of each command
in its scripts. It was computing that by scanning from the script
start and counting newlines, so that O(N^2) work had to be done
for an N-command script. In a script with 50K lines, this adds
up to about 10 seconds on my machine.
To add insult to injury, the results were subtly wrong, because
expr_scanner_offset() scanned to find the NUL that flex inserts
at the end of the current token --- and before the first yylex
call, no such NUL has been inserted. So we ended up computing the
script's last line number, not its first one. This was visible only
in case of \gset at the start of a script, which perhaps accounts
for the lack of complaints.
To fix, steal an idea from plpgsql and track the current lexer
ending position and line count as we advance through the script.
(It's a bit simpler than plpgsql since we don't need to back up.)
Also adjust a couple of other places that were invoking scans
from script start when they didn't really need to. I made a new
psqlscan function psql_scan_get_location() that replaces both
expr_scanner_offset() and expr_scanner_get_lineno(), since in
practice expr_scanner_get_lineno() was only being invoked to find
the line number of the current lexer end position.
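Schematically, the incremental tracking works like this (names are
illustrative, not the actual psqlscan API):

    typedef struct
    {
        const char *script;      /* whole script text */
        int         last_offset; /* how far newlines have been counted */
        int         cur_lineno;  /* line number at last_offset, 1-based */
    } LocationTracker;

    /* Advance to new_offset, counting newlines only in the newly scanned part. */
    static int
    tracker_lineno_at(LocationTracker *t, int new_offset)
    {
        for (; t->last_offset < new_offset; t->last_offset++)
        {
            if (t->script[t->last_offset] == '\n')
                t->cur_lineno++;
        }
        return t->cur_lineno;
    }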
Reported-by: Daniel Vérité <daniel@manitou-mail.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/84a8a89e-adb8-47a9-9d34-c13f7150ee45@manitou-mail.org
|
|
pgbench already had code to check if the soft rlimit is too low for the
specified number of connections. If too low, it errored out, telling the user
to increase the limit.
However, we can do better: if the hard limit allows, increase the soft limit
so that it is sufficient for the number of connections.
It is common for the soft limit to be considerably lower than the hard limit,
due to the danger of soft limits > 1024 breaking programs that use
select(2), as explained in [1].
[1]: https://0pointer.net/blog/file-descriptor-limits.html
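The gist of the new behavior, sketched (simplified, not the exact pgbench
code):

    #include <sys/resource.h>

    /* Raise the soft fd limit to "needed" if the hard limit allows it. */
    static int
    raise_fd_limit(rlim_t needed)
    {
        struct rlimit rlim;

        if (getrlimit(RLIMIT_NOFILE, &rlim) != 0)
            return -1;
        if (rlim.rlim_cur >= needed)
            return 0;           /* soft limit already sufficient */
        if (needed > rlim.rlim_max)
            return -1;          /* hard limit too low; error out as before */
        rlim.rlim_cur = needed;
        return setrlimit(RLIMIT_NOFILE, &rlim);
    }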
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAGECzQQh6VSy3KG4pN1d%3Dh9J%3DD1rStFCMR%2Bt7yh_Kwj-g87aLQ%40mail.gmail.com
|
|
pgbench client-side data generation uses COPY FREEZE to load data for most
tables. COPY FREEZE isn't supported for partitioned tables and since pgbench
only supports partitioning pgbench_accounts, pgbench used a hard-coded check to
skip COPY FREEZE and use plain COPY for a partitioned pgbench_accounts.
If the user has manually partitioned one of the other pgbench tables, this
causes client-side data generation to error out with:
ERROR: cannot perform COPY FREEZE on a partitioned table
Fix this by limiting COPY FREEZE to ordinary tables (RELKIND_RELATION).
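The shape of the fix, roughly (table name and relkind handling are
simplified compared to the real code):

    #include <stdio.h>

    /*
     * Use COPY FREEZE only when the target is an ordinary table
     * (relkind 'r', i.e. RELKIND_RELATION); anything else gets plain COPY.
     */
    static const char *
    copy_statement_for(char relkind, const char *table)
    {
        static char buf[256];

        snprintf(buf, sizeof(buf), "copy %s from stdin%s",
                 table, relkind == 'r' ? " with (freeze on)" : "");
        return buf;
    }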
Author: Sergey Tatarintsev <s.tatarintsev@postgrespro.ru>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/flat/97f55fca-8a7b-4da8-b413-7d1c57010676%40postgrespro.ru
|
|
Commit af35fe501 caused "pgbench -i" to emit a '\r' character
for each data row loaded (when stderr is a terminal).
That's effectively invisible on-screen, but it causes the
connected terminal program to consume a lot of cycles.
It's even worse if you're connected over ssh, as the data
then has to pass through the ssh tunnel.
Simplest fix is to move the added logic inside the if-tests
that check whether to print a progress line. We could do
it another way that avoids duplicating these few lines,
but on the whole this seems the most transparent way to
write it.
Like the previous commit, back-patch to all supported versions.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/4k4drkh7bcmdezq6zbkhp25mnrzpswqi2o75d5uv2eeg3aq6q7@b7kqdmzzwzgb
Backpatch-through: 13
|
|
Instead of passing the parse result from yyparse() via a global
variable, pass it via a function output argument.
This complements earlier work to make the parsers reentrant.
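Schematically, the interface change looks like this (declarations only; the
names are illustrative, not the actual grammar files):

    typedef struct Node Node;   /* stand-in for the real parse-tree type */

    /* Before: the result was left in a global for the caller to pick up. */
    extern Node *parse_result;
    extern int   yyparse_before(void);

    /* After: the result comes back through an output argument, which bison
     * is told about with a %parse-param declaration. */
    extern int   yyparse_after(Node **result);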
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
|
|
This commit rewrites a good chunk of the command arrays in TAP tests
with a grammar based on the following rules:
- Fat commas are used between option names and their values, making it
clear to both humans and perltidy that values and names are bound
together. This is particularly useful for the readability of multi-line
command arrays, and there are plenty of them in the TAP tests. Most of
the test code is updated to use this style. Some commands used
parentheses to show the link, or attached values and options in a single
string. These are updated to use fat commas instead.
- Option names are switched to use their long names, making them more
self-documented. Based on a suggestion by Andrew Dunstan.
- Add some trailing commas after the last item in multi-line arrays,
which is a common Perl style.
Not all the places are taken care of, but this covers a very good chunk
of them.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Michael Paquier, Peter Smith, Euler Taveira
Discussion: https://postgr.es/m/87jzc46d8u.fsf@wibble.ilmari.org
|
|
This removes all the various workarounds for avoiding compiler
warnings with Flex 2.5.35. Several recent patches have added
additional warnings that would either need to be fixed along the lines
of the existing workarounds, or we would have to decide to no longer care
about this; we do the latter here.
Flex 2.5.35 is extremely outdated, and you can't even download it
anymore from any of the Flex project sites, so it's nearly impossible
to support.
After this, using Flex 2.5.35 will still work, but the generated code
will produce numerous compiler warnings.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/1a204ccd-7ae6-478c-a431-407b5c48ccc6@eisentraut.org
|
|
Backpatch-through: 13
|
|
Trying to clean up the code a bit while we're working on these files
for the reentrant scanner/pure parser patches. This cleanup only
touches the code sections after the second '%%' in each file, via a
manually-supervised and locally hacked up pgindent.
|
|
An \if command appearing within a false (not-to-be-executed) \if
branch was incorrectly treated the same as \elif. This could allow
statements within the inner \if to be executed when they should
not be. Also the missing inner \if stack entry would result in an
assertion failure (in assert-enabled builds) when the final \endif
is reached.
Report and patch by Michail Nikolaev. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
|
|
Redefine our exact width types with standard C99 types and macros,
including int64_t, INT64_MAX, INT64_C(), PRId64 etc. We were already
using <stdint.h> types in a few places.
One complication is that Windows' <inttypes.h> uses format strings like
"%I64d", "%I32", "%I" for PRI*64, PRI*32, PRI*PTR, instead of mapping to
other standardized format strings like "%lld" etc. as seen on other known
systems. Teach our snprintf.c to understand them.
This removes a lot of configure clutter, and should also allow 64-bit
numbers and other standard types to be used in localized messages
without casting.
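For example, code can now simply write (a trivial sketch):

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t rows = INT64_C(9223372036854775807);

        /*
         * PRId64 expands to the platform's int64_t format; on Windows that is
         * an "%I64d"-style string, which our snprintf.c now understands.
         */
        printf("max rows: %" PRId64 "\n", rows);
        return 0;
    }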
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/ME3P282MB3166F9D1F71F787929C0C7E7B6312%40ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM
|
|
During pgbench's table initialization, progress updates could display
leftover characters from the previous message if the new message
was shorter. This commit resolves the issue by appending spaces to
the current message to fully overwrite any remaining characters from
the previous line.
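A minimal sketch of the padding logic (variable names are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Overwrite the previous progress line, blanking any leftover characters. */
    static void
    print_progress(const char *msg, int *prev_len)
    {
        int     len = (int) strlen(msg);
        int     pad = (*prev_len > len) ? *prev_len - len : 0;

        fprintf(stderr, "\r%s%*s", msg, pad, "");
        *prev_len = len;
    }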
Back-patch to all the supported versions.
Author: Yushi Ogiwara, Tatsuo Ishii, Fujii Masao
Reviewed-by: Tatsuo Ishii, Fujii Masao
Discussion: https://postgr.es/m/9a9b8b95b6a709877ae48ad5b0c59bb9@oss.nttdata.com
|
|
When using a pipeline, a transaction starts from the first command and
is committed with a Sync message or when the pipeline ends.
Functions like IsInTransactionBlock() or PreventInTransactionBlock()
were already able to understand a pipeline as being in a transaction
block, but that was not the case for CheckTransactionBlock(). This
function is called, for example, to generate a WARNING for SET LOCAL,
complaining that it is used outside of a transaction block.
The current state of the code caused multiple problems, like:
- SET LOCAL executed at any stage of a pipeline issued a WARNING, even
if the command was at least second in the pipeline, at which point the
pipeline is already in a transaction state.
- LOCK TABLE failed when invoked at any step of a pipeline, even if it
should be able to work within a transaction block.
The pipeline protocol assumes that the first command of a pipeline is
not part of a transaction block, and that any follow-up command is
considered as being within a transaction block.
This commit changes the backend so that an implicit transaction block is
started each time the first Execute message of a pipeline has finished
processing, with this implicit transaction block ended once a sync is
processed. The checks based on XACT_FLAGS_PIPELINING in the routines
checking if we are in a transaction block are no longer necessary: it is
enough to rely on the existing ones.
Some tests are added to pgbench, that can be backpatched down to v17
when \syncpipeline is involved and down to v14 where \startpipeline and
\endpipeline are available. This is unfortunately limited regarding the
error patterns that can be checked, but it provides coverage for various
pipeline combinations to check if these succeed or fail. These tests
are able to capture the case of SET LOCAL's WARNING. The author has
proposed a different feature to improve the coverage by adding similar
meta-commands to psql where error messages could be checked, something
more useful for the cases where commands cannot be used in transaction
blocks, like REINDEX CONCURRENTLY or VACUUM. This is considered
future work for v18~.
Author: Anthonin Bonnefoy
Reviewed-by: Jelte Fennema-Nio, Michael Paquier
Discussion: https://postgr.es/m/CAO6_XqrWO8uNBQrSu5r6jh+vTGi5Oiyk4y8yXDORdE2jbzw8xw@mail.gmail.com
Backpatch-through: 13
|
|
Fix typo in commit cae0f3c405.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/70eaa41b-805b-ce19-6004-5a0dccd3f731%40gmail.com
|
|
Previously, per-script statistics were never output when all
transactions failed due to serialization or deadlock errors. However,
it is reasonable to report such information, if there is any, even
when there are no successful transactions, since these failed
transactions are now objects to be reported.
Meanwhile, if the total number of successful, skipped, and failed
transactions is zero, we don't have to report the number of failed
transactions, similarly to the number of skipped transactions, which
avoids printing "NaN%" in lines of failed-transaction reports.
Also, the number of transactions in per-script results now includes
skipped and failed transactions. This prevents printing "total of NaN%"
when no transactions are successfully processed. The number of
transactions actually processed per script, and the TPS based on it, are
now output explicitly on a separate line.
Author: Yugo Nagata
Reviewed-by: Tatsuo Ishii
Discussion: https://postgr.es/m/20240921003544.2436ef8da9c5c8cb963c651b%40sraoss.co.jp
|
|
The following commands were allowed on partitioned tables, with
different effects:
1) ALTER TABLE SET [UN]LOGGED did not issue an error, and did not update
pg_class.relpersistence.
2) CREATE UNLOGGED TABLE was working with pg_class.relpersistence marked
as initially defined, but partitions did not inherit the UNLOGGED
property, which was confusing.
This commit causes the commands mentioned above to fail for partitioned
tables, instead.
pg_dump is tweaked so that partitioned tables marked as UNLOGGED ignore
the option when dumped from older server versions. pgbench needs a
tweak for --unlogged and --partitions=N to ignore the UNLOGGED option on
the partitioned tables it creates, their partitions still being unlogged.
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/ZiiyGFTBNkqcMQi_@paquier.xyz
|
|
Various buildfarm critters were complaining about
pgbench.c:304:1: warning: 'static' is not at beginning of declaration [-Wold-style-declaration]
Evidently a thinko in 720b0eaae.
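The complaint is about declarations of this shape (illustrative names):

    int static counter_bad = 0;     /* warns: 'static' not at beginning */
    static int counter_good = 0;    /* conventional ordering, no warning */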
|
|
These probably should have been static all along; it was only
forgotten out of sloppiness.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
|
|
This covers both regular and inplace changes, since bugs arise at their
intersection. Where marked, these witness extant bugs. Back-patch to
v12 (all supported versions).
Reviewed (in an earlier version) by Robert Haas.
Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
|
|
After further review, we want to move in the direction of always
quoting GUC names in error messages, rather than the previous (PG16)
wildly mixed practice or the intermittent (mid-PG17) idea of doing
this depending on how possibly confusing the GUC name is.
This commit applies appropriate quotes to (almost?) all mentions of
GUC names in error messages. It partially supersedes a243569bf65 and
8d9978a7176, which had moved things a bit in the opposite direction
but which then were abandoned in a partial state.
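The style now applied consistently, as a sketch using the ereport API (the
message text and parameter are illustrative):

    #include "postgres.h"

    static void
    reject_setting(const char *name, int newval)
    {
        /* GUC names are wrapped in double quotes in the message text. */
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid value for parameter \"%s\": %d",
                        name, newval)));
    }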
Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
The pgindent part of this is pretty small, consisting mainly of
fixing up self-inflicted formatting damage from patches that
hadn't bothered to add their new typedefs to typedefs.list.
In order to keep it from making anything worse, I manually added
a dozen or so typedefs that appeared in the existing typedefs.list
but not in the buildfarm's list. Perhaps we should formalize that,
or better find a way to get those typedefs into the automatic list.
pgperltidy is as opinionated as always, and reformat-dat-files too.
|
|
Many other utilities use -d to specify the database to use, but
pgbench uses it to enable debug mode. This is causing some users
to accidentally enable it. This commit changes -d to accept the
database name and introduces --dbname. Debug mode can still be
enabled with --debug. This is a backward-incompatible change, but
it has been judged to be worth the trade-off, i.e., some scripts
that use pgbench will need to be updated.
Author: Greg Sabino Mullane
Reviewed-by: Tomas Vondra, Euler Taveira, Alvaro Herrera, David Christensen
Discussion: https://postgr.es/m/CAKAnmmLjAzwVtb%3DVEaeuCtnmOLpzkJ1uJ_XiQ362YdD9B72HSg%40mail.gmail.com
|
|
This change adds a new meta-command called \syncpipeline to pgbench,
able to send a sync message without flushing using the new libpq
function PQsendPipelineSync().
This meta-command is available within a block made of \startpipeline and
\endpipeline.
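On the libpq side, the meta-command corresponds to something like this
(error handling omitted):

    #include <libpq-fe.h>

    static void
    pipeline_with_sync(PGconn *conn)
    {
        PQenterPipelineMode(conn);
        PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
        PQsendPipelineSync(conn);   /* \syncpipeline: sync without flushing */
        PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
        PQpipelineSync(conn);       /* sync that also flushes the buffer */
        /* ... consume results with PQgetResult() before leaving the mode ... */
        PQexitPipelineMode(conn);
    }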
Author: Anthonin Bonnefoy
Discussion: https://postgr.es/m/CAO6_XqpcNhW6LZHLF-2NpPzdTbyMm4-RVkr3+AP5cOKSm9hrWA@mail.gmail.com
|
|
When a pipeline is opened with \startpipeline and not closed, pgbench
will either error on the next transaction with an "already in pipeline
mode" error or successfully end if this was the last transaction --
despite not sending anything that was queued in the pipeline.
Make it an error if the end of the script is reached while there's an
open pipeline.
Backpatch to 14, where pgbench got support for pipelines.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Za4IObZkDjrO4TcS@paquier.xyz
|
|
per style guidelines
Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPtzstExQ4%3DvFH%2BWzZ4g4xEx2JA%3DqxussxOdxVEwJce6bw%40mail.gmail.com
|
|
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
Backpatch-through: 12
|
|
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing
use warnings;
by
use warnings FATAL => 'all';
in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
|
|
Using a scale factor large enough that the number of rows to insert
gets larger than INT32_MAX would cause an infinite loop in
initPopulateTable(), preventing pgbench from finishing its initialization.
Oversight in e35cc3b3f2d0, which refactored the data generation logic.
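The shape of the bug, sketched (simplified, not the actual
initPopulateTable() code):

    #include <stdint.h>

    /*
     * Buggy shape: with total > INT32_MAX, a 32-bit counter never reaches the
     * limit, so the loop does not terminate:
     *
     *     for (int32_t i = 0; i < total; i++) ...
     *
     * Fixed shape: the counter has the same width as the row count.
     */
    void
    generate_rows(int64_t total)
    {
        for (int64_t i = 0; i < total; i++)
        {
            /* emit row i */
        }
    }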
Author: John Hsu
Reviewed-by: Tatsuo Ishii, Japin Li
Discussion: https://postgr.es/m/CA+-JvFvHsOafjHcuFPfkyouHNZvbOXhBNhwZxKm3WNgYz9bwzA@mail.gmail.com
|
|
It draws an unnecessary error in builds compiled without thread support.
Added by commit 038f586d5f1d, which was backpatched to 14; though in
branch master we no longer support such builds, there's no reason to
have it there, so remove it from all branches since 14.
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ZW2G9Ix4nBKLcSSO@paquier.xyz
|
|
A PostgreSQL release tarball contains a number of prebuilt files, in
particular files produced by bison, flex, and perl, as well as HTML and
man documentation. We have done this, consistent with established
practice at the time, to not require these tools for building from a
tarball. Some of these tools were hard to get, or get the right
version of, from time to time, and shipping the prebuilt output was a
convenience to users.
Now this has at least two problems:
One, we have to make the build system(s) work in two modes: Building
from a git checkout and building from a tarball. This is pretty
complicated, but it works so far for autoconf/make. It does not
currently work for meson; you can currently only build with meson from
a git checkout. Making meson builds work from a tarball seems very
difficult or impossible. One particular problem is that since meson
requires a separate build directory, we cannot make the build update
files like gram.h in the source tree. So if you were to build from a
tarball and update gram.y, you will have a gram.h in the source tree
and one in the build tree, but the way things work is that the
compiler will always use the one in the source tree. So you cannot,
for example, make any gram.y changes when building from a tarball.
This seems impossible to fix in a non-horrible way.
Second, there is increased interest nowadays in precisely tracking the
origin of software. We can reasonably track contributions into the
git tree, and users can reasonably track the path from a tarball to
packages and downloads and installs. But what happens between the git
tree and the tarball is obscure and in some cases non-reproducible.
The solution for both of these issues is to get rid of the step that
adds prebuilt files to the tarball. The tarball now only contains
what is in the git tree (*). Getting the additional build
dependencies is no longer a problem nowadays, and the complications to
keep these dual build modes working are significant. And of course we
want to get the meson build system working universally.
This commit removes the make distprep target altogether. The make
dist target continues to do its job, it just doesn't call distprep
anymore.
(*) - The tarball also contains the INSTALL file that is built at make
dist time, but not by distprep. This is unchanged for now.
The make maintainer-clean target, whose job it is to remove the
prebuilt files in addition to what make distclean does, is now just an
alias to make distclean. (In practice, it is probably obsolete given
that git clean is available.)
The following programs are now hard build requirements in configure
(they were already required by meson.build):
- bison
- flex
- perl
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
|
|
Since C99, there can be a trailing comma after the last value in an
enum definition. A lot of new code has been introducing this style on
the fly. Some new patches are now taking an inconsistent approach to
this. Some add the last comma on the fly if they add a new last
value, some are trying to preserve the existing style in each place,
some are even dropping the last comma if there was one. We could
nudge this all in a consistent direction if we just add the trailing
commas everywhere once.
I omitted a few places where there was a fixed "last" value that will
always stay last. I also skipped the header files of libpq and ecpg,
in case people want to use those with older compilers. There were
also a small number of cases where the enum type wasn't used anywhere
(but the enum values were), which ended up confusing pgindent a bit,
so I left those alone.
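For example, the style being adopted (names are illustrative):

    typedef enum
    {
        EXAMPLE_IDLE,
        EXAMPLE_RUNNING,
        EXAMPLE_DONE,   /* trailing comma: a new value later touches one line */
    } ExampleState;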
Discussion: https://www.postgresql.org/message-id/flat/386f8c45-c8ac-4681-8add-e3b0852c1620%40eisentraut.org
|
|
Starting on 2023-08-03, this intermittently terminated a "pgbench -C"
test in CI. It could affect a high-client-count "pgbench" without "-C".
While parallel reindexdb and vacuumdb reach the same problematic check,
sufficient client count and/or connection turnover is less plausible for
them. Given the lack of examples from the buildfarm or from manual
builds, reproducing this must entail rare operating system
configurations. Also correct the associated error message, which was
wrong for non-Windows. Back-patch to v12, where the pgbench check first
appeared. While v11 vacuumdb has the problematic check, reaching it
with typical vacuumdb usage is implausible.
Reviewed by Thomas Munro.
Discussion: https://postgr.es/m/CA+hUKG+JwvTNdcyJTriy9BbtzF1veSRQ=9M_ZKFn9_LqE7Kp7Q@mail.gmail.com
|
|
Add a description of the step letters to the --help output.
Author: Gurjeet Singh <gurjeet@singh.im>
Reviewed-by: Tristen Raab <tristen.raab@highgo.ca>
Discussion: https://www.postgresql.org/message-id/flat/CABwTF4Xbc=K4tFj5Znc8jx0GCufQa577GCDsWD7=71qDnUEOyQ@mail.gmail.com
|
|
There was a mismatch between the const qualifiers for
excludeDirContents in src/backend/backup/basebackup.c and
src/bin/pg_rewind/filemap.c, which led to a quick search for similar
cases. We should make excludeDirContents match, but the rest of the
changes seem like a good idea as well.
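The pattern being aligned on, sketched with a shortened entry list:

    /*
     * Both the pointers and the strings are const, so the whole array can
     * live in read-only storage.
     */
    static const char *const excludeDirContents[] =
    {
        "pg_dynshmem",
        "pg_notify",
        "pg_stat_tmp",
        NULL
    };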
Author: David Steele <david@pgmasters.net>
Discussion: https://www.postgresql.org/message-id/flat/669a035c-d23d-2f38-7ff0-0cb93e01d610@pgmasters.net
|
|
Per buildfarm member koel
|
|
Previously, when a client was aborted due to some error during
benchmarking, the other clients continued their run until the number
of transactions specified by -t was reached or the time specified by -T
expired. At the end, the results were printed with the caution "Run
was aborted; the above results are incomplete".
The new option "--exit-on-abort" allows pgbench to exit immediately in
this case so that users can quickly fix the cause of the failure and
try another round of benchmarking.
Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Tatsuo Ishii
Discussion: https://postgr.es/m/flat/20230804130325.df32e60879c38c92bca64207%40sraoss.co.jp
|
|
The two culprit commits are 5765cfe and 5e0c761.
Per buildfarm member koel for the first commit, while I have noticed the
second one in passing.
|
|
The buildfarm reports that this is an error on gcc (Debian 4.7.2-5)
4.7.2, 32-bit. The bug seems to be GCC bug 53119, which has obviously
been fixed for years.
Author: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/CT6HJ3U8068R.3A8SJMV02D9BC@gonk
|
|
gcc (version 13) fails to properly analyze the code due to the loop
stop condition including `l != NULL`. Let's just help it out.
Author: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/CT6HJ3U8068R.3A8SJMV02D9BC@gonk
|
|
This commit switches the client-side data generation from INSERT queries
to COPY for the two tables pgbench_branches and pgbench_tellers.
pgbench_accounts was already using COPY.
COPY is a better interface for bulk loading or high-latency connections
(this point can be countered with the option for server-side data
generation, but client-side is still the default), and measurements have
shown that using it for these two other tables can lead to improvements
during initialization. I did not notice slowdowns at large scale
numbers on a local setup either, most of the work happening for the
accounts table.
Previously COPY was only used for the pgbench_accounts table because the
amount of data was much larger than for the two other tables. The code is
refactored so that all three tables use the same code path to execute the
COPY queries, with a callback to build data rows.
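Roughly, the shared code path looks like this (names and row format are
illustrative, not the actual pgbench functions):

    #include <libpq-fe.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Callback that formats one COPY data row for a given table. */
    typedef void (*row_builder) (char *buf, size_t buflen, int64_t row);

    static void
    branches_row(char *buf, size_t buflen, int64_t row)
    {
        /* bid, bbalance, filler (NULL) */
        snprintf(buf, buflen, "%lld\t0\t\\N\n", (long long) (row + 1));
    }

    /* Shared loop: start the COPY, then feed rows produced by the callback. */
    static void
    copy_table(PGconn *conn, const char *copy_sql, int64_t nrows, row_builder build)
    {
        PGresult   *res = PQexec(conn, copy_sql);
        char        line[256];

        if (PQresultStatus(res) != PGRES_COPY_IN)
        {
            PQclear(res);
            return;             /* error handling omitted */
        }
        PQclear(res);

        for (int64_t i = 0; i < nrows; i++)
        {
            build(line, sizeof(line), i);
            PQputCopyData(conn, line, (int) strlen(line));
        }
        PQputCopyEnd(conn, NULL);
        PQclear(PQgetResult(conn));
    }

A caller would then invoke copy_table() once per table, passing the matching
row builder.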
Author: Tristan Partin
Discussion: https://postgr.es/m/CSTU5P82ONZ1.19XFUGHMXHBRY@c3po
|
|
The tables created by pgbench rely on a few assumptions for TPC-B, where
the "filler" attribute is used to comply with this benchmark's rules as
well as pgbench's historical behavior. The data generated for each table
uses this filler in a different way:
- pgbench_accounts uses it as a blank-padded empty string.
- pgbench_tellers and pgbench_branches use it as a NULL value.
There were no checks done about the consistency of the data initialized,
and this has shown up while discussing a patch that changes the logic
in charge of the client-side data generation (pgbench documents all that
already in its comments). This commit adds some checks on the data
generated for both the server-side and client-side logic.
Reviewed-by: Tristan Partin
Discussion: https://postgr.es/m/ZLik4oKnqRmVCM3t@paquier.xyz
|
|
All supported computers have either POSIX or Windows threads, and we no
longer have any automated testing of --disable-thread-safety. We define
a vestigial ENABLE_THREAD_SAFETY macro to 1 in ecpg_config.h in case it
is useful, but we no longer test it anywhere in PostgreSQL code, and
associated dead code paths are removed.
The Meson and perl-based Windows build scripts never had an equivalent
build option.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKGLtmexrpMtxBRLCVePqV_dtWG-ZsEbyPrYc%2BNBB2TkNsw%40mail.gmail.com
|
|
As coded, the row data strings generated for pgbench_accounts' COPY in
the client-side data generation were always assigning 0 for one of its
attributes. This simplifies a bit an upcoming patch to switch
client-side data generation of pgbench to use COPY for the teller and
branch tables, rather than individual INSERTs.
Author: Tristan Partin
Discussion: https://postgr.es/m/CSTU5P82ONZ1.19XFUGHMXHBRY@c3po
|