Age | Commit message | Author |
|
insert_lock() forgot to send the row lock command (the lock_kind == 3
case) to nodes other than the main node.
|
|
When an INSERT command is received, pgpool automatically issues a
table LOCK command against the target table, but it forgot to send the
command to nodes other than the main node. This only happened in
extended query mode. This commit fixes the bug.
Discussion: GitHub issue #69.
https://github.com/pgpool/pgpool2/issues/69
Backpatch-through: v4.1
|
|
These leaks were brought in by commit 6fdba5c33 "Use psprintf()
instead of snprintf()." Since that commit was backpatched through 4.1,
this needs to be backpatched through 4.1 too.
Per Coverity (CID 1559736).
Backpatch-through: 4.1.
|
|
These leaks were mostly brought in by commit 65dbbe7a0 "Add IPv6
support for hostname and heartbeat_hostname parameter." Since that
commit was only applied to the master branch, no backpatch is necessary.
Per Coverity (CID 1559737 and CID 1559734).
|
|
When the query cache feature is enabled, it was possible for a user
to read rows through the query cache from tables that should not be
visible to that user.
- If a query cache is created for a row-security-enabled table by user
A, and then another user B accesses the table via SET ROLE or SET
SESSION AUTHORIZATION in the same session, it was possible for user B
to retrieve rows which should not be visible to user B.
- If a query cache is created for a table by user A, and then another
user B accesses the table via SET ROLE or SET SESSION AUTHORIZATION
in the same session, it was possible for user B to retrieve rows
which should not be visible to user B.
- If a query cache is created for a table by a user, and then the
user's access right to the table is revoked by the REVOKE command, it
was still possible for the user to retrieve the rows through the
query cache.
Besides the vulnerabilities, there were multiple bugs in the query
cache feature.
- If a query cache is created for a row-security-enabled table by a
user, and then ALTER DATABASE BYPASSRLS or ALTER ROLE BYPASSRLS
disables row security for the table, a subsequent SELECT still
returns the same rows as before through the query cache.
- If a query cache is created for a table by a user, and then ALTER
TABLE SET SCHEMA moves the table so that the search path no longer
allows access to it, a subsequent SELECT still returns the rows as
before through the query cache.
To fix the above, the following changes are made:
- Do not allow creating or using a query cache for row-security-enabled
tables (even if the table is included in
cache_safe_memqcache_table_list).
- Do not allow creating or using a query cache if SET ROLE/SET
SESSION AUTHORIZATION is executed in the session (query cache
invalidation is performed when a table is modified, as usual).
- Remove the entire query cache if REVOKE/ALTER DATABASE/ALTER TABLE/ALTER
ROLE is executed. If the command is executed in an explicit
transaction, do not create or use a query cache until the
transaction is committed (query cache invalidation is performed
when a table is modified, as usual). If the transaction is aborted,
do not remove the query cache.
Patch is created by Tatsuo Ishii.
Backpatch-through: v4.1
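A minimal sketch in C of the session-level guard described above; the
flag and function names here are hypothetical, not the actual pgpool
identifiers:
/* set when SET ROLE / SET SESSION AUTHORIZATION is seen in this session */
static bool query_cache_disabled_in_session = false;

/* consulted before creating or looking up a query cache entry */
static bool
query_cache_allowed(bool table_has_row_security)
{
    if (table_has_row_security)
        return false;   /* never cache row-security-enabled tables */
    if (query_cache_disabled_in_session)
        return false;   /* SET ROLE etc. was executed in this session */
    return true;
}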
|
|
Now these watchdog configuration parameters accept IPv6 IP addresses.
Author: Kwangwon Seo
Reviewed-by: Muhammad Usama, Tatsuo Ishii
Discussion: [pgpool-hackers: 4476] Watchdog and IPv6
https://www.pgpool.net/pipermail/pgpool-hackers/2024-July/004477.html
|
|
This is a follow-up commit for 181d300de6337fe9a10b60ddbd782aa886b563e9.
If the previous query produces a parameter status message, a
subsequent parse() needs to read and process it, because parse() wants
to read the ready for query message that is supposed to follow the
parameter status message. However, when ParameterStatus() got called,
the query-in-progress flag was set, and if the query processed in this
parse() call was load balanced, it was possible that only one of the
parameter status messages from the backends was processed. The
parameter status messages are likely to come from all live backends,
because they are generated by a SET command, and SET commands are sent
to all live backends in replication mode and snapshot isolation
mode. So unset the query-in-progress flag before calling
ParameterStatus().
Here is the test case written in pgproto data format.
'P' "" "SET application_name TO foo"
'B' "" "" 0 0 0
'E' "" 0
'P' "" "SELECT 1"
'B' "" "" 0 0 0
'E' "" 0
'P' "" "SET application_name TO bar"
'B' "" "" 0 0 0
'E' "" 0
'S'
'Y'
'X'
Backpatch-through: v4.1.
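A condensed sketch of the fix, assuming pgpool's existing
session-context helpers; the surrounding parse() flow is omitted:
/* read pending ParameterStatus messages with the query-in-progress
 * flag cleared, so the message is consumed from every live backend,
 * not only from the load-balance node */
pool_unset_query_in_progress();
ParameterStatus(frontend, backend);
pool_set_query_in_progress();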
|
|
In replication mode and snapshot isolation mode, when a command
finishes, pgpool waits for a ready for query message, but it forgot
that some commands (for example SET ROLE) produce a parameter status
message. As a result pgpool errored out because another message
arrived before the ready for query message. Deal with the case when a
parameter status message arrives.
Here is the test case written in pgproto data format.
'P' "" "SET ROLE TO foo"
'B' "" "" 0 0 0
'E' "" 0
'P' "" "SELECT 1"
'B' "" "" 0 0 0
'E' "" 0
'S'
'Y'
Backpatch-through: v4.1.
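A condensed sketch of the receive loop after the fix (simplified
names; process_parameter_status() is a hypothetical helper):
for (;;)
{
    char    kind;

    pool_read(con, &kind, sizeof(kind));
    if (kind == 'Z')            /* ready for query: done */
        break;
    if (kind == 'S')            /* parameter status: consume it */
    {
        process_parameter_status(con);
        continue;
    }
    ereport(ERROR,
            (errmsg("unexpected message \"%c\" before ready for query", kind)));
}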
|
|
Currently the default value of the *_user parameters is "nobody".
This commit changes the default value of the *_user parameters to ''.
|
|
The following log messages appear when a child process exits due to settings (e.g., child_life_time or child_max_connections).
Downgrade them to DEBUG1 because they are normal messages.
reaper handler
reaper handler: exiting normally
|
|
Currently the only way to trigger log rotation in the logging collector process
is to send the SIGUSR1 signal directly to the logging collector process.
However, it would be better to have a way to do it with an external
tool (e.g. logrotate) without requiring knowledge of the logging collector's PID.
This commit adds a new PCP command "pcp_log_rotate" for triggering log rotation.
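Assuming pcp_log_rotate follows the usual pcp client conventions
(-h host, -p PCP port, -U user), a hypothetical invocation from e.g. a
logrotate postrotate script would look like:
pcp_log_rotate -h localhost -p 9898 -U pgpool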
|
|
Previously, fixed-size buffers were used for snprintf in the file. It's
not appropriate to use snprintf here, because the result string could
exceed the buffer size, which could lead to an incomplete command or
path being used afterward.
Backpatch-through: 4.1.
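The shape of the change, as a sketch (variable names illustrative):
psprintf() returns a palloc'd buffer sized to fit the formatted
result, so truncation cannot occur.
/* before: silently truncated if the result exceeds the buffer */
char    command[1024];
snprintf(command, sizeof(command), "%s/%s %s", dir, prog, args);
/* after: the buffer is allocated to exactly the needed size */
char   *command = psprintf("%s/%s %s", dir, prog, args);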
|
|
Use "psql -V" instead of "initdb -V" in the sample scripts
bacause in some cases postgresqlxx-server may not be installed.
|
|
Previously pgpool could hang after a flush message arrives. Consider
the following scenario:
(1) The backend sends a portal suspend message.
(2) pgpool writes it to the frontend write buffer, but does not flush it.
(3) The frontend sends a flush message to pgpool.
(4) pgpool forwards the flush message to the backend.
(5) Since there's no pending message in the backend, nothing happens.
(6) The frontend waits for the portal suspend message from pgpool in vain.
To fix this, at (4) pgpool flushes the data in the frontend write buffer
if some data remains (in this case the portal suspend message). Then
the frontend can send the next request message to pgpool.
Discussion: https://github.com/pgpool/pgpool2/issues/59
Backpatch-through: master, 4.5, 4.4, 4.3, 4.2 and 4.1.
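A condensed sketch of the fix at step (4); variable names are
illustrative, not the literal pgpool code:
int     len;

/* forward the Flush message ('H' + length) to the backend */
pool_write(backend_con, "H", 1);
len = htonl(sizeof(len));
pool_write(backend_con, &len, sizeof(len));
pool_flush(backend_con);

/* the fix: if pgpool has data buffered for the frontend (here the
 * portal suspend message), flush it now rather than never */
pool_flush(frontend);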
|
|
Remove dead code surrounded by "#ifdef NOT_USED".
|
|
It is reported that a pgpool child segfaulted in pool_do_auth. The cause
was that MAIN_CONNECTION() returned NULL. It seems my_main_node_id was
set to the incorrect node id 0, which was actually in down status; thus
there was no connection in cp->slots[0]. In this particular case a
client connected to pgpool while failover occurred on another pgpool
node, and the failover was propagated by watchdog, which changed
backend_status in shared memory. new_connection() properly updates
my_backend_status, but it forgot to update my_main_node_id, so
MAIN_CONNECTION returned an incorrect backend id.
Problem reported by: Emond Papegaaij
Discussion: [pgpool-general: 9175] Segmentation fault
https://www.pgpool.net/pipermail/pgpool-general/2024-July/001852.html
Backpatch-through: V4.1.
|
|
The calculation of pooled_connection, which is used by the process
eviction algorithm, was not correct: the number always resulted in
max_pool. Also, more comments are added.
Discussion: [pgpool-hackers: 4490] Issue with dynamic process management
https://www.pgpool.net/pipermail/pgpool-hackers/2024-July/004491.html
Backpatch-through: master, 4.5, 4.4
|
|
We often see a timeout error in the buildfarm test. Analyzing the
buildfarm log shows:
2024-07-10 03:41:31.044: watchdog pid 29119: FATAL: failed to create watchdog receive socket
2024-07-10 03:41:31.044: watchdog pid 29119: DETAIL: bind on "TCP:50010" failed with reason: "Address already in use"
I suspect there's something wrong in the watchdog shutdown process. To
confirm this theory, add an sh command that shows all processes named
"pgpool" at the end of each test cycle.
|
|
This commit fixes a segmentation fault that occurs when parsing pgpool.conf
if a setting value is not enclosed in single quotes.
The patch was created by Carlos Chapi, reviewed and modified by Tatsuo Ishii.
|
|
Some functions (close_idle_connection(), new_connection() and
pool_create_cp()) used MAIN* and VALID_BACKEND where they are not
appropriate. MAIN* and VALID_BACKEND are only useful for the current
connections to the backends, not for pooled connections, since in
pooled connections which backend is the main node, or which is up and
running, is not necessarily the same as for the current connections.
The misuse of those macros sometimes leads to segfaults.
This patch introduces a new function in_use_backend_id(), which returns
the first node id in use. This commit replaces some of the MAIN* usages
with the return value of in_use_backend_id(). Also, inappropriate calls
to VALID_BACKEND are replaced with the CONNECTION_SLOT macro.
Problem reported by Emond Papegaaij
Discussion: https://www.pgpool.net/pipermail/pgpool-general/2024-June/009176.html
[pgpool-general: 9114] Re: Another segmentation fault
Backpatch-through: V4.1
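A sketch of the new helper as the commit message describes it
(simplified; the real signature may differ):
/* return the first backend slot actually populated in this pooled
 * connection, or -1 if none */
static int
in_use_backend_id(POOL_CONNECTION_POOL *pool)
{
    int     i;

    for (i = 0; i < NUM_BACKENDS; i++)
        if (pool->slots[i] != NULL)
            return i;
    return -1;
}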
|
|
The macro used to return REAL_MAIN_NODE_ID if there's no session context.
This is wrong, since REAL_MAIN_NODE_ID can change at any time when
failover/failback happens. Suppose REAL_MAIN_NODE_ID ==
my_main_node_id == 1. Then, due to failback, REAL_MAIN_NODE_ID is
changed to 0. Then MAIN_CONNECTION(cp) will return NULL, and any
reference to it will cause a segmentation fault. To prevent the issue we
should return my_main_node_id instead.
Discussion: https://www.pgpool.net/pipermail/pgpool-general/2024-June/009205.html
Backpatch-through: V4.1
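A condensed sketch of the corrected fallback, built from the
description above (the normal path and struct fields are simplified):
int
pool_virtual_main_db_node_id(void)
{
    POOL_SESSION_CONTEXT *sc = pool_get_session_context(true);

    if (!sc)
        return my_main_node_id; /* was REAL_MAIN_NODE_ID, which can move
                                 * whenever failover/failback happens */

    /* normal path: use the query context's virtual main node id */
    return sc->query_context ? sc->query_context->virtual_main_node_id
        : my_main_node_id;
}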
|
|
processes_reporting() accidentally called both send_row_description()
and send_row_description_and_data_rows().
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2024-June/004472.html
[pgpool-hackers: 4471] [PATCH] printing empty row first in query "show pool_process"
Author: Kwangwon Seo
Back patch to V4.2 where the problem started.
|
|
When pending messages are created, Pgpool-II used to do the following:
(1) pmsg = pool_pending_message_create(); /* create a pending message */
(2) pool_pending_message_dest_set(pmsg, query_context) /* set PostgreSQL node ids to be sent */
(3) pool_pending_message_query_set(pmsg, query_context); /* add query context */
(4) pool_pending_message_add(pmsg); /* add the pending message to the list */
(5) pool_pending_message_free_pending_message(pmsg); /* free memory allocated by pool_pending_message_create() */
The reason pool_pending_message_free_pending_message(pmsg) is called
here is that pool_pending_message_add() creates a copy of the pending
message and then adds the copy to the list. This commit modifies
pool_pending_message_add() so that it does not create a copy of the
object and adds the object itself to the pending messages list. This
way, step (5) can be eliminated, which should reduce the memory
footprint and CPU cycles.
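The resulting call sequence after this commit (same functions, step
(5) gone):
pmsg = pool_pending_message_create();
pool_pending_message_dest_set(pmsg, query_context);
pool_pending_message_query_set(pmsg, query_context);
pool_pending_message_add(pmsg); /* takes ownership: no copy, no separate free */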
|
|
It is reported that a pgpool child segfaulted [1].
[snip]
Further down the thread it is reported that despite VALID_BACKEND(i)
returning true, backend->slots[i] is NULL, which should have been filled
in by new_connection().
It seems there's a race condition. In new_connection(), there's a code
fragment:
/*
* Make sure that the global backend status in the shared memory
* agrees the local status checked by VALID_BACKEND. It is possible
* that the local status is up, while the global status has been
* changed to down by failover.
*/
A--> if (BACKEND_INFO(i).backend_status != CON_UP &&
BACKEND_INFO(i).backend_status != CON_CONNECT_WAIT)
{
ereport(DEBUG1,
(errmsg("creating new connection to backend"),
errdetail("skipping backend slot %d because global backend_status = %d",
i, BACKEND_INFO(i).backend_status)));
/* sync local status with global status */
B--> *(my_backend_status[i]) = BACKEND_INFO(i).backend_status;
continue;
}
It is possible that at A, backend_status in the shared memory is down,
but by the time execution reaches B the status has been changed to up,
so new_connection() skipped creating the backend connection. This seems
to explain why the connection slot is NULL while VALID_BACKEND returns
true. To prevent the race condition, backend_status in shared memory is
copied to a local variable and evaluated. Also the VALID_BACKEND
just before:
pool_set_db_node_id(CONNECTION(backend, i), i);
is changed to:
if (VALID_BACKEND(i) && CONNECTION_SLOT(backend, i))
so that it prevents a crash just in case.
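A sketch of the fix: the shared status is read once into a local
variable, so the test at A and the assignment at B cannot observe two
different values.
BACKEND_STATUS status = BACKEND_INFO(i).backend_status;   /* single read */

if (status != CON_UP && status != CON_CONNECT_WAIT)
{
    /* sync local status with global status */
    *(my_backend_status[i]) = status;
    continue;
}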
[1] [pgpool-general: 9104] Another segmentation fault
|
|
With network monitoring enabled, a Pgpool node would shut down immediately if it
lost all network interfaces or assigned IP addresses, providing extra protection
by quickly removing a non-communicative node from the cluster.
The issue was that Pgpool responded to network blackout events even when network
monitoring was disabled. This fix ensures that the network monitoring socket is
not opened when network monitoring is not enabled, preventing unnecessary shutdowns.
|
|
[pgpool-hackers: 4465] abnormal behavior about PGPOOL RESET. and proposal a patch file.
It was reported in the thread above that the "pgpool reset" command fails if watchdog is enabled.
test=# PGPOOL RESET client_idle_limit;
SET
ERROR: Pgpool node id file �y/pgpool_node_id does not exist
DETAIL: If watchdog is enable, pgpool_node_id file is required
message type 0x5a arrived from server while idle
message type 0x43 arrived from server while idle
message type 0x5a arrived from server while idle
SetPgpoolNodeId() tried to obtain the path to the node id file using
the global variable config_file_dir and failed, because config_file_dir
pointed to an automatic variable in ParseConfigFile().
To fix this, change config_file_dir from a pointer to an array and
save the path string into config_file_dir in ParseConfigFile().
Also a regression test is added to 004.watchdog.
Bug reported and problem analysis by keiseo.
Back patch to V4.2 in which the node id file was introduced.
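A sketch of the change (the array size and local variable names are
illustrative):
/* before: pointed into ParseConfigFile()'s stack frame */
/* static char *config_file_dir; */

/* after: storage that survives ParseConfigFile() returning */
static char config_file_dir[POOLMAXPATHLEN + 1];

/* in ParseConfigFile(): */
strlcpy(config_file_dir, dir_buf, sizeof(config_file_dir));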
|
|
Author: Umar Hayat
|
|
It was reported that psql_scan crashes while determining whether a
string in a long query is a psql variable (i.e. starting with ":") or
not.
https://github.com/pgpool/pgpool2/issues/54
This is because no callback struct was provided when calling
psql_scan_create(). Later psql_scan() tries to invoke a callback and
crashes because the pointer to the callback struct is NULL. To fix
this, provide a PsqlScanCallbacks struct with a NULL callback function
pointer inside. With this, psql_scan() avoids invoking a callback.
Backpatch to master, V4.5, V4.4, V4.3, V4.2 and V4.1, where psql_scan
was introduced.
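A sketch of the fix, assuming the PsqlScanCallbacks layout pgpool
imported from PostgreSQL (a single get_variable member):
static const PsqlScanCallbacks psqlscan_callbacks = {
    NULL                        /* get_variable: no substitution wanted */
};

scan_state = psql_scan_create(&psqlscan_callbacks);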
|
|
https://github.com/pgpool/pgpool2/issues/52
|
|
If a string list type configuration parameter (e.g. unix_socket_directories, pcp_socket_dir, etc.) contains white space, it may cause a startup failure.
|
|
- Add missing header files in the autoconf check.
- Add LDAP_DEPRECATED to include prototypes for deprecated ldap functions.
Patch is created by Vladimir Petko.
|
|
Commit 0b94cd9f caused a gcc warning:
streaming_replication/pool_worker_child.c: In function 'do_worker_child':
streaming_replication/pool_worker_child.c:281:40: warning: 'watchdog_leader' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (!pool_config->use_watchdog ||
^
It seems this only occurs with older gcc (e.g. gcc 4.8.5).
Backpatch-through: master branch only, as commit 0b94cd9f was only applied to master.
|
|
The permission of /etc/sudoers.d/pgpool should be mode 0440.
|
|
This is a follow-up commit for 0564864e "Fix assorted causes of
segmentation fault.". That commit lacked the fix for the case where
verify_backend_node calls get_server_version, i.e. checking the
availability of slots.
Patch provided by: Emond Papegaaij
Backpatch-through: v4.4
Discussion:
[pgpool-general: 9072] Re: Segmentation after switchover
https://www.pgpool.net/pipermail/pgpool-general/2024-April/009133.html
|
|
It is reported that pgpool and its child process segfault in certain
cases when failover is involved.
In pgpool main, get_query_result (called from find_primary_node) crashed at:
do_query(slots[backend_id]->con, query, res, PROTO_MAJOR_V3);
It seems slots[0] is NULL here. slots[0] is created by
make_persistent_db_connection_noerror(), but it failed with the log
message: "find_primary_node: make_persistent_db_connection_noerror
failed on node 0". Note that at the time
make_persistent_db_connection_noerror() was called, VALID_BACKEND
reported that node 0 was up. This means that failover was ongoing and
the node status used by VALID_BACKEND had not yet caught up. As a result
get_query_result was called with slots[0] = NULL, which caused the
segfault. The fix is to check the slots entry before calling
get_query_result.
The health check also has an issue with the connection "slot" memory. It
is managed by HealthCheckMemoryContext, and slot is the pointer to that
memory. When elog(ERROR) is raised, pgpool long-jumps and resets the
memory context; thus slot remains a pointer to freed memory. To fix
this, always set slot to NULL right after the HealthCheckMemoryContext
call.
A similar issue was found in the streaming replication check and is
also fixed in this commit.
Problem reported and analyzed by: Emond Papegaaij
Backpatch-through: v4.4
Discussion:
[pgpool-general: 9070] Re: Segmentation after switchover
https://www.pgpool.net/pipermail/pgpool-general/2024-April/009131.html
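A condensed sketch of the two fixes (the do_query() call is quoted
from above; the rest is illustrative):
/* find_primary_node(): skip nodes whose connection attempt failed */
if (slots[backend_id] == NULL)
    continue;
do_query(slots[backend_id]->con, query, res, PROTO_MAJOR_V3);

/* health check: clear the pointer right after the context call, so a
 * longjmp from elog(ERROR) can never leave it dangling */
MemoryContextSwitchTo(HealthCheckMemoryContext);
slot = NULL;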
|
|
The test script forgot to execute shutdownall before exiting.
|
|
It was reported that valgrind found several errors, including an
uninitialized memory error in read_startup_packet. It allocates memory
for the user name in a startup packet (in the case of a cancel or SSL
request) using palloc, and later the memory is used by pstrdup. Since
memory allocated by palloc is undefined, this should have been palloc0.
Bug reported by: Emond Papegaaij
Backpatch-through: v4.1
Discussion:
[pgpool-general: 9065] Re: Segmentation after switchover
https://www.pgpool.net/pipermail/pgpool-general/2024-April/009126.html
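A one-line sketch (names illustrative): palloc() returns undefined
bytes, so a buffer later read as a string must be zero-filled.
/* was: user = palloc(len);  -- contents undefined */
user = palloc0(len);        /* zero-filled, safe for later pstrdup() */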
|
|
Commit 3f3c1656 "Fix statement_level_load_balance with BEGIN etc."
brought errors/hangs when load_balance_mode is off, the primary node id
is not 0, and queries are BEGIN etc.
pool_setall_node_to_be_sent() checked whether the node is the primary.
If not, it just returned with an empty where_to_send map, which makes
set_virtual_main_node() not set
query_context->virtual_main_node_id. As a result, the MAIN_NODE macro
(actually pool_virtual_main_db_node_id()) returns
REAL_MAIN_NODE_ID, which is 0 if node 0 is alive (this should have
been the primary node id).
The following simple test reveals the bug.
(1) Create a two-node cluster using pgpool_setup.
(2) Shut down node 0 and recover it (pcp_recovery_node 0). This
makes node 0 standby and node 1 primary.
(3) Add the following to pgpool.conf and restart the whole cluster.
load_balance_mode = off
backend_weight1 = 0
(4) Type "begin" from psql. It gets stuck.
Bug found and analyzed by Emond Papegaaij.
Discussion: https://www.pgpool.net/pipermail/pgpool-general/2024-March/009113.html
Backpatch-through: v4.1
|
|
https://github.com/pgpool/pgpool2/issues/42 reported that with CFLAGS
-flto=4 -Werror=odr -Werror=lto-type-mismatch -Werror=strict-aliasing
gcc emits errors. Some of them are mistakes made when their sources were
brought in from PostgreSQL. This commit fixes them. Note that I was
not able to suppress some errors, at least with my gcc (9.4.0). This
may be due to a gcc bug (false positives) or just a bug in the old
gcc; I don't know at this point. Maybe someday revisit this.
Discussion:
[pgpool-hackers: 4442] Fixing GitHub issue 42
https://www.pgpool.net/pipermail/pgpool-hackers/2024-March/004443.html
../src/include/query_cache/pool_memqcache.h:251:20: warning: type of 'pool_fetch_from_memory_cache' does not match original declaration [-Wlto-type-mismatch]
251 | extern POOL_STATUS pool_fetch_from_memory_cache(POOL_CONNECTION * frontend,
| ^
query_cache/pool_memqcache.c:731:1: note: 'pool_fetch_from_memory_cache' was previously declared here
731 | pool_fetch_from_memory_cache(POOL_CONNECTION * frontend,
| ^
query_cache/pool_memqcache.c:731:1: note: code may be misoptimized unless '-fno-strict-aliasing' is used
../src/include/utils/palloc.h:64:22: warning: type of 'CurrentMemoryContext' does not match original declaration [-Wlto-type-mismatch]
64 | extern MemoryContext CurrentMemoryContext;
| ^
../../src/utils/mmgr/mcxt.c:40:15: note: 'CurrentMemoryContext' was previously declared here
../../src/utils/mmgr/mcxt.c:40:15: note: code may be misoptimized unless '-fno-strict-aliasing' is used
../src/include/utils/memutils.h:55:22: warning: type of 'TopMemoryContext' does not match original declaration [-Wlto-type-mismatch]
55 | extern MemoryContext TopMemoryContext;
| ^
../../src/utils/mmgr/mcxt.c:46:15: note: 'TopMemoryContext' was previously declared here
../../src/utils/mmgr/mcxt.c:46:15: note: code may be misoptimized unless '-fno-strict-aliasing' is used
../src/include/pool_config.h:646:22: warning: type of 'pool_config' does not match original declaration [-Wlto-type-mismatch]
646 | extern POOL_CONFIG * pool_config;
| ^
config/pool_config.l:46:14: note: 'pool_config' was previously declared here
46 | POOL_CONFIG *pool_config = &g_pool_config; /* for legacy reason pointer to the above struct */
| ^
config/pool_config.l:46:14: note: code may be misoptimized unless '-fno-strict-aliasing' is used
|
|
- The comment for sr_check_period: the default value should be 10 seconds.
- Also fixed some typos in comments.
Patch is created by hiroin and modified by Bo Peng.
|
|
Remove Makefile.in etc. generated by autoconf.
Create .gitignore under src/config and add the files generated by bison and flex.
|
|
Commit 240c668d "Guard against inappropriate protocol data." caused
reset queries to fail if extended query messages do not end. This commit
fixes that by checking whether we are running reset queries in
SimpleQuery(). Also add a test case for this.
|
|
Fix warning introduced in the previous commit.
|