|
The error is caused by the follow primary process. In that process,
pg_rewind is executed to sync a standby with the new primary. If the
new primary's promotion is slow, pg_rewind incorrectly reports "there's
no need to pg_rewind". Even if the new standby starts, it fails later
on, which makes the 034.promote_node test fail.
To fix this, add CHECKPOINT to the failover script. Also pass the "-c"
option to pg_rewind to retrieve WAL from archives so that pg_rewind can
find the necessary WAL.
Reported-by: Bo Peng <pengbo@sraoss.co.jp>
Suggested-by: Bo Peng <pengbo@sraoss.co.jp>
Backpatch-through: v4.3
|
|
Replace "egrep" with "grep -E" to avoid obsolescence warnings on newer GNU grep versions.
|
|
On some platforms (in my case Rocky Linux 10 running on VirtualBox)
the test fails to finish shutdownall after test1. This could be caused
by the failover in the test failing to restore the signal handler,
which should accept the shutdown signal from shutdownall. Adding
"sleep 5" before shutdownall seems to mitigate the problem.
Author: Tatsuo Ishii <ishii@postgresql.org>
Backpatch-through: v4.5
|
|
When "COPY relname TO STDOUT" is executed in extended query protocol
mode, pgpool segfaulted.
When read_kind_from_backend() reads a message from a backend while
processing extended query protocol messages, it extracts the
corresponding entry from the pending message queue. However, if the
head of the queue is an "execute" message, some incoming message types
are exceptional, because messages other than CommandComplete (which
means the execute message has finished) may come from the backend:
DataRow, ErrorResponse and NoticeResponse. Unfortunately we overlooked
that 'H' (CopyOutResponse) belongs to this group too. Thus when a
CopyOutResponse came from a backend, the pending execute message was
removed. If the next message from the frontend is Sync (which often
happens), read_kind_from_backend() sets session_context->query_context
to NULL, then calls pool_unset_query_in_progress(), which accesses
session_context->query_context and segfaults.
The fix is to add CopyOutResponse to the exception list. Just in case,
we also add 'd' (CopyData) and 'c' (CopyDone) to the list, though this
may not actually be necessary since CopyData and CopyDone are
processed in CopyDataRows().
Add a regression test case to 126.copy_hang (master and v4.7) or
076.copy_hang (v4.6 or before).
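The decision described above can be sketched as follows. This is a
hypothetical illustration, not pgpool's actual code: a predicate that
tells whether a backend message kind must leave the pending "execute"
message in the queue.

```c
#include <stdbool.h>

/* While an "execute" pending message is at the head of the queue, the
 * message kinds below must NOT pop it, because they can arrive before
 * the CommandComplete ('C') that actually ends the execute. */
static bool keeps_pending_execute(char backend_kind)
{
    switch (backend_kind)
    {
        case 'D':               /* DataRow */
        case 'E':               /* ErrorResponse */
        case 'N':               /* NoticeResponse */
        case 'H':               /* CopyOutResponse: the kind this fix adds */
        case 'd':               /* CopyData: added just in case */
        case 'c':               /* CopyDone: added just in case */
            return true;        /* keep the pending execute message */
        default:
            return false;       /* e.g. 'C' (CommandComplete) pops it */
    }
}
```

Before the fix, 'H' fell into the default branch, so the execute entry
was popped too early and the subsequent Sync handling segfaulted.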
Author: Tatsuo Ishii <ishii@postgresql.org>
Reported-by: https://github.com/tetesh
Reviewed-by: Bo Peng <pengbo@sraoss.co.jp>
Discussion: https://github.com/pgpool/pgpool2/issues/133
Backpatch-through: v4.2
|
|
Previously Pgpool-II accepted "slony mode" as one of the
backend_clustering_mode values to support Slony-I
(https://www.slony.info/). However, the latest Slony-I was released in
2022, nearly 3 years ago at this point, and we have heard nothing from
users about retiring Slony mode. This suggests that there are no
active Slony-I mode users, so let's drop Slony-I support.
Author: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: Bo Peng <pengbo@sraoss.co.jp>
Discussion: [pgpool-general: 9486] Retiring slony mode
https://www.pgpool.net/pipermail/pgpool-general/2025-May/009489.html
Discussion: https://www.postgresql.org/message-id/20250722.153130.1007226654125839063.ishii%40postgresql.org
|
|
Fix pgpool_setup unexpected EOF introduced in the previous commit (25ad5e46615c3ed337a8cefe7e0563d8b89ca18d).
|
|
value in the document.
|
|
To investigate recent failure of the test, add:
log_client_messages = on
|
|
PostgreSQL 18 heavily changed psql's \conninfo output format, which
made the test fail because the test relies on \conninfo. This commit
makes the test script aware of the PostgreSQL version to fix the issue.
Backpatch-through: v4.2
|
|
To investigate recent failure of the test, add:
log_backend_messages = terse
|
|
This commit includes multiple fixes to compile Pgpool-II on Fedora 42,
which uses gcc 15 (C23).
- Modify pool_type.h. "bool" is now standard in C99 and
above. PostgreSQL decided to require C99 to compile, so we follow
suit, which means just including <stdbool.h>. We also define
TRUE/FALSE as (bool) 1 and (bool) 0 respectively. In PostgreSQL they
are only used in the Windows build, but we still use them in some
places. Eventually we should replace them with true/false since we do
not support Windows.
- Function pointer arguments are now required to match the prototype
of the function to be called. For example:
static pid_t worker_fork_a_child(ProcessType type, void (*func) (), void *params);
should be:
static pid_t worker_fork_a_child(ProcessType type, void (*func) (void *), void *params);
The prototype of pool_create_relcache() is changed as well.
- raw_expression_tree_walker() calls walker() in many places. Callers
of walker() must now cast its first argument to (Node *). We replace
calls like:
return walker(((RangeVar *) node)->alias, context);
with:
return WALK(((RangeVar *) node)->alias, context);
where WALK is defined as:
#define WALK(n,c) walker((Node *) (n), c)
- Note: we get lots of warnings regarding OpenSSL while compiling
Pgpool-II on Fedora 42. The version used there:
$ openssl version
OpenSSL 3.2.4 11 Feb 2025 (Library: OpenSSL 3.2.4 11 Feb 2025)
A fix for this is not included in this commit; we need to look into it
in the future.
Discussion: https://github.com/pgpool/pgpool2/issues/124
Backpatch-through: v4.6
|
|
Under src/test/regression/tests we traditionally assign 001-049 to
tests for features and 050-999 to tests for bugs. However, we are
running out of the 001-049 range. To solve the problem, this commit
renames tests numbered 50 or higher by adding 50, i.e. to 100 and
higher. Also add src/test/regression/tests/README to explain the test
numbering policy. Moreover, src/test/regress.sh is modified so that it
does not mistake flat files under tests for tests.
|
|
Commit 8ff2b9f6e mistakenly put synchronous commit parameters in
pgpool.conf. Unbreak the test by putting the parameters in
postgresql.conf. Also check that the clustering mode is streaming
replication, because those parameters cause PostgreSQL to suspend if
the clustering mode is anything other than streaming replication.
Backpatch-through: v4.6
|
|
In the test a query is sent to the standby server right after rows are
inserted into the primary server. Due to replication lag, the inserted
rows may not be found on the standby on slower machines. This commit
tries to fix the issue by using synchronous replication with the
remote_apply option.
Backpatch-through: v4.2
|
|
The ruby script used "File.exists", which is deprecated in newer
versions of Ruby. Replace it with "File.exist".
Backpatch-through: v4.2
|
|
034.promote_node and 075.detach_primary_left_down_node set a timeout
of 60 seconds for the follow primary script to finish. It turns out
that these timeout values are not long enough and sometimes caused
false errors, so increase them from 60 seconds to 120 seconds.
Backpatch-through: v4.2
|
|
023.ssl_connection did not cover ssl_ecdh_curve. This commit tests it
using a bad ssl_ecdh_curve parameter to see whether the connection
between frontend and pgpool fails.
Author: Tatsuo Ishii <ishii@postgresql.org>
Backpatch-through: v4.2
|
|
Fix bugs in commit 534e04a0e.
- If no argument is specified, watchdog_setup complains of a shell script error.
- When VIP is requested, delegate_ip, if_up_cmd and if_down_cmd are
added to pgpool.conf multiple times.
Reviewed-by: Bo Peng <pengbo@sraoss.co.jp>
Author: Tatsuo Ishii <ishii@postgresql.org>
|
|
This commit allows specifying the VIP parameter (delegate_ip) in
watchdog_setup using the new option "-vip [ip]". If ip is omitted,
'127.0.0.1' is assumed. Even if the vip option is specified, pgpool
will not actually set the VIP on the system: if_up_cmd and if_down_cmd
are just set to an echo command and do nothing except emit a log. This
option is intended for tracing Pgpool-II's actions regarding VIP
handling.
Author: Tatsuo Ishii <ishii@postgresql.org>
Discussion: https://www.postgresql.org/message-id/20250820.151646.1640218512808280876.ishii%40postgresql.org
|
|
|
|
Prior to version 4.6, the online recovery database was hardcoded to "template1".
This commit introduces a new configuration parameter, "recovery_database",
which allows users to specify the database used for online recovery.
The default value is "postgres".
|
|
Commit 04e09df17 was not a sufficient fix. The test calls the
pcp_proc_info() pgpool_adm function along with a user name and
password (in the test the password is the same string as the user
name). The problem is, the user name is that of the user who runs the
test, and we used psql -a to submit the SQL, which echoes the user
name. Of course the user name can vary depending on the environment,
and that makes the test fail. To fix the issue, run psql without the
-a option.
|
|
The result of the test showed the local host IP. Although the IP can
be either IPv4 or IPv6, the test script hadn't considered that. To fix
this, test.sh now converts both IPv4 and IPv6 addresses to
"localhost".
|
|
This commit adds a new pgpool_adm extension function:
pcp_proc_info. It also adds new fields client_host, client_port and
SQL statement to pcp_proc_info and "show pool_pools". With these
additions it is now possible to track the relationships among clients
of pgpool, pgpool itself and PostgreSQL.
Moreover, the commit makes it possible to know which commands
(statements) were last executed by using pcp_proc_info. Previously
that was not possible without looking into the pgpool log.
libpcp.so version is bumped from 2.0 to 2.1.
|
|
When ssl_passphrase_command is not valid, the error message is
typically "bad decrypt", but sometimes it seems to be "wrong tag".
|
|
|
|
If enable_pool_hba = on, the user's auth method is "password", no
password is registered in pool_passwd, and the auth method in
pg_hba.conf is "scram-sha-256" or "md5", authentication is performed
as expected the first time a client connects to pgpool. But if a
client connects to a cached connection, any password from the client
is accepted.
authenticate_frontend() asks the client for a password and stores it
in frontend->password. When pgpool authenticates the backend,
authenticate_frontend_SCRAM() or authenticate_frontend_md5() is called
depending on the pg_hba.conf setting. authenticate_frontend_*() calls
get_auth_password() to get the cached backend password, but it
mistakenly returned frontend->password if pool_passwd has no entry for
the user. Then authenticate_frontend_*() issued a challenge based on
frontend->password. As a result, they compared frontend->password
against itself, which always succeeds. To fix this, when
get_auth_password() is called with the reauth parameter being
non-zero, return backend->password.
Also, if enable_pool_hba = off, in some cases a client is not asked
for a password the first time, or when it connects to a cached
connection, even though it should be.
If pool_hba.conf is disabled, get_backend_connection() does not call
Client_authentication(), thus frontend->password is not set. Then
pool_do_reauth() calls do_clear_text_password(). It should have called
authenticate_frontend_clear_text() to get a password from the client,
but a mistake in an if statement prevented that. The mistake is fixed
in this commit.
Pgpool-II versions affected: v4.0 or later.
This commit also does the following:
- Remove the single-PostgreSQL code path to simplify the
authentication code. As a result, the following cases are no longer
OK.
- Remove crypt authentication support for frontend and backend. The
feature had never been documented or tested. Moreover, crypt
authentication was removed from PostgreSQL long ago (8.4, 2009).
- Add a new regression test "040.client_auth". The test performs
exhaustive client authentication tests using a test specification
file formatted in CSV.
The CSV files have 7 fields:
username: the user name used for the test case
pool_hba.conf: takes "scram", "md5", "password", "pam", "ldap" or
"off". If "scram", "md5", "password", "pam" or "ldap", the user
will have an entry in pool_hba.conf accordingly. If "off",
enable_pool_hba will be off.
allow_clear_text_frontend_auth: takes "on" or "off".
pool_passwd: takes "AES", "md5" or "off". If "AES" or "md5", the
user's password will be stored in pool_passwd using the AES256 or md5
encryption method accordingly. If "off" is specified, no entry will
be created.
pg_hba.conf: almost the same as pool_hba.conf, except this is for
pg_hba.conf.
expected: takes "ok" or "fail". If "ok", the authentication is
expected to succeed; if it fails, the test is regarded as
failed. "fail" is the opposite: the authentication is expected to
fail, and if it succeeds the test is regarded as failed.
comment: an arbitrary comment
By changing these fields, we can easily modify or add test cases. The
merit of this method is potentially higher test coverage: for humans,
it is easier to find uncovered test cases in a table than in program
code.
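A line of such a specification might look like the following. This is
a made-up example following the 7-field layout above, not an actual
row from the test's CSV files:

```
username,pool_hba.conf,allow_clear_text_frontend_auth,pool_passwd,pg_hba.conf,expected,comment
user1,scram,off,AES,scram,ok,scram auth with AES-encrypted pool_passwd entry
user2,md5,off,off,md5,fail,no pool_passwd entry so md5 auth must fail
```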
Backpatch-through: v4.2
The patch was created by Tatsuo Ishii.
|
|
When a named statement is prepared, it is possible to bind and then
execute without a parse message. The problem is, the table oids
necessary to invalidate the query cache at execute or COMMIT were
collected only while processing parse messages (Parse()). Thus if a
bind was executed without a parse after a previous execute, no table
oids were collected, and pgpool failed to invalidate the query cache.
The fix is to collect table oids at bind time too.
Add a regression test to 006.memqcache.
Problem reported by and test program provided by Achilleas Mantzios
<a.mantzios@cloud.gatewaynet.com>.
Discussion: [pgpool-general: 9427] Clarification on query results cache visibility
https://www.pgpool.net/pipermail/pgpool-general/2025-April/009430.html
Backpatch-through: v4.2
|
|
When an execute message is received, pgpool checks its max number of
rows parameter. If it's not zero, pgpool sets the "partial_fetch" flag
to instruct pool_handle_query_cache() not to create a query cache
entry. The problem is, commit 2a99aa5d1 missed that even
INSERT/UPDATE/DELETE arrives with a non-zero (mostly 1) execute
message parameter, so pgpool set the flag even for non-SELECTs. This
resulted in failing to invalidate the query cache, because if the flag
is true, subsequent code in pool_handle_query_cache() skips cache
invalidation. It was an oversight in this commit (my fault):
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=2a99aa5d1910f1fd4855c8eb6751a26cbaa5e48d
To fix this, change Execute() to check whether the query is a
read-only SELECT before setting the flag.
Also add a test to 006.memqcache.
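The corrected decision can be sketched as below. This is a
hypothetical illustration only: pgpool's real check inspects the parse
tree, whereas this sketch uses a crude string prefix test.

```c
#include <ctype.h>
#include <stdbool.h>
#include <strings.h>

/* Crude stand-in for pgpool's read-only SELECT check. */
static bool is_read_only_select(const char *query)
{
    while (isspace((unsigned char) *query))
        query++;
    return strncasecmp(query, "SELECT", 6) == 0;
}

/* partial_fetch may only be set for a read-only SELECT, because
 * INSERT/UPDATE/DELETE also arrive with a non-zero max-rows parameter
 * in the execute message. */
static bool decide_partial_fetch(const char *query, int max_rows)
{
    /* Before the fix this was effectively "max_rows != 0", which
     * wrongly suppressed cache invalidation for non-SELECTs too. */
    return max_rows != 0 && is_read_only_select(query);
}
```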
Problem reported by and a test program provided by Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>.
Discussion: [pgpool-general: 9427] Clarification on query results cache visibility
https://www.pgpool.net/pipermail/pgpool-general/2025-April/009430.html
Backpatch-through: v4.2
|
|
Commit 182b65bfc made it possible to use multiple Unix socket
directories: /tmp and /var/run/postgresql. However, if the system does
not have an accessible /var/run/postgresql, pgpool_setup fails unless
$PGSOCKET_DIR is explicitly set. Instead of failing, this commit
allows pgpool_setup to skip inaccessible directories.
Backpatch-through: v4.5
|
|
Author: Bo Peng
Tested-by: Taiki Koshino
Backpatch-through: v4.5
|
|
Occasionally the test failed with:
ERROR: relation "t2" does not exist
LINE 1: SELECT i, 'QUERY ID T1-1' FROM t2;
It seems the cause is that the newly created table t2 takes some time
to be replicated to the standby, so insert "sleep 1" after the table
creation.
Backpatch-through: v4.2
|
|
If the query cache is enabled, a query is operated in extended query
mode, and pgpool is running in streaming replication mode, an execute
message could return incorrect results.
This could happen when an execute message comes with a non-zero row
number parameter. In this case pgpool fetches up to the specified
number of rows and returns a "PortalSuspended" message, and Pgpool-II
does not create a query cache entry. But if another execute message
with a zero row number parameter then comes in, it fetches the rest of
the rows (if any) and creates a query cache entry holding only the
rows that this execute message fetched.
Obviously this causes unwanted results later on: subsequent execute
messages return results from a query cache that contains only the rows
captured after the previous row-limited execute message.
Another problem arises when multiple execute messages are sent
consecutively. In this case Pgpool-II returned exactly the same
results from the query cache for each execute message. This is wrong,
since the second and subsequent executes should return 0 rows.
To fix this, new boolean fields "atEnd" and "partial_fetch" are
introduced in the query context. They are initialized to false when a
query context is created (and also when a bind message is
received). If an execute message with a zero row number is executed,
atEnd is set to true upon receiving the CommandComplete message. If an
execute message with a non-zero row number is executed, partial_fetch
is set to true, and pgpool neither uses the cached result nor creates
a query cache entry.
When atEnd is true, pgpool returns a CommandComplete message with
"SELECT 0" as the result of the execute message.
Tests for these cases are added to the 006.memqcache regression
test.
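The lifecycle of the two flags can be sketched as follows. The struct
and function names are simplified stand-ins mirroring the commit, not
pgpool's actual code:

```c
#include <stdbool.h>

typedef struct
{
    bool atEnd;             /* all rows fetched; portal is complete  */
    bool partial_fetch;     /* a row-limited fetch happened: no cache */
} QueryCacheState;

/* Both flags are reset when a bind message arrives (and when the
 * query context is created). */
static void on_bind(QueryCacheState *s)
{
    s->atEnd = false;
    s->partial_fetch = false;
}

/* An execute with a non-zero row number parameter marks the portal as
 * partially fetched: never use or create a cache entry for it. */
static void on_execute(QueryCacheState *s, int max_rows)
{
    if (max_rows != 0)
        s->partial_fetch = true;
}

/* CommandComplete after an unlimited execute means all rows were
 * returned; later executes must get "SELECT 0". */
static void on_command_complete(QueryCacheState *s, int max_rows)
{
    if (max_rows == 0)
        s->atEnd = true;
}

static bool may_create_cache(const QueryCacheState *s)
{
    return !s->partial_fetch;
}
```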
Backpatch-through: v4.2
Discussion: [pgpool-hackers: 4547] Bug in query cache
https://www.pgpool.net/pipermail/pgpool-hackers/2024-December/004548.html
|
|
4dd7371c2 added test cases. The SQL syntax used in the tests was not
compatible with PostgreSQL 15 or earlier.
Backpatch-through: v4.2
|
|
When the query cache is enabled and an execute message is sent from
the frontend, pgpool injects query cache data into the backend message
buffer if cache data is available. inject_cached_message() is
responsible for this task. But it had an oversight when the message
stream from the frontend includes more than one set of bind or
describe messages before a sync message. It tried to determine the end
of the frontend messages by finding a bind complete or row description
message from the backend. But in that case, these messages may not
indicate the end of the message stream, because one more bind complete
or row description message can follow. As a result the cached message
was inserted at an inappropriate position and pgpool mistakenly raised
a "kind mismatch" error.
This commit changes the algorithm used to detect the end of the
message stream: compare the number of messages from the backend with
the pending message queue length. When a message is read from the
backend, a counter is incremented if the message is one of parse
complete, bind complete, close complete, command complete, portal
suspended or row description; for other message types the counter is
not incremented. When the counter reaches the pending message queue
length, we are at the end of the message stream and inject the cached
messages.
Test cases for 006.memqcache are added.
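The new end-of-stream detection can be sketched as below. This is an
illustrative stand-in, not pgpool's code: message kinds are given as a
char array, and the function reports the index at which the counted
messages reach the pending queue length.

```c
#include <stdbool.h>

/* Only the backend message kinds that answer one pending
 * extended-protocol message count toward the end of the stream. */
static bool counts_toward_stream_end(char kind)
{
    switch (kind)
    {
        case '1':               /* ParseComplete   */
        case '2':               /* BindComplete    */
        case '3':               /* CloseComplete   */
        case 'C':               /* CommandComplete */
        case 's':               /* PortalSuspended */
        case 'T':               /* RowDescription  */
            return true;
        default:                /* e.g. 'D' DataRow: not counted */
            return false;
    }
}

/* Returns the index of the backend message at which the counted
 * messages reach queue_len (i.e. where the cached messages can be
 * injected), or -1 if the end of the stream is not reached yet. */
static int stream_end_index(const char *kinds, int n, int queue_len)
{
    int counted = 0;
    for (int i = 0; i < n; i++)
    {
        if (counts_toward_stream_end(kinds[i]))
            counted++;
        if (counted == queue_len)
            return i;
    }
    return -1;
}
```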
Backpatch-through: v4.2.
|
|
Sometimes we see regression errors like:
2024-12-01 13:55:55.508: watchdog pid 27340: FATAL: failed to create watchdog receive socket
2024-12-01 13:55:55.508: watchdog pid 27340: DETAIL: bind on "TCP:50002" failed with reason: "Address already in use"
Before starting each regression test, we use the "clean_all" script to
kill all remaining processes. I suspect that this is not enough to
release bound ports, so I add a netstat command to check whether some
ports remain bound.
For now this commit is for the master branch only.
|
|
Commit 6d4106f9c forgot to add the pgproto data that is necessary for
the test.
|
|
When enabled, log protocol messages from each backend. Possible
options are "none", "terse" and "verbose". "none" disables the feature
and is the default. "verbose" prints a log line each time pgpool
receives a message from a backend. "terse" is similar to verbose
except that it does not print logs for repeated messages, to save log
lines; when a different kind of message is received, pgpool prints a
log message including the count of the repeated message. One downside
of "terse" is that the repeated messages will not be logged if the
pgpool child process is killed before a different kind of message
arrives.
For testing, 039.log_backend_messages is added.
Discussion: [pgpool-hackers: 4535] New feature: log_backend_messages
https://www.pgpool.net/pipermail/pgpool-hackers/2024-November/004536.html
|
|
In the test we check the error message when the target certificate is
revoked. Unfortunately the error message from OpenSSL seems to have
changed between v3.0 and v3.2:
v3.0 or before: "sslv3 alert certificate revoked"
v3.2: "ssl/tls alert certificate revoked"
The fix is to check only the "alert certificate revoked" part.
|
|
Renew cert.sh using examples in PostgreSQL docs.
|
|
This reverts commit dd5a79aef8081bea74f9be7c4beb54ef34637ec9.
The attempt to fix the 024.cert_auth regression test failure on
RockyLinux9 was not successful.
|
|
Starting from Thu, 21 Nov 2024 16:11:06 +0900, the buildfarm's
024.cert_auth test has been failing on RockyLinux9 regardless of the
Pgpool-II or PostgreSQL version. It seems that around that time the
test platform was updated from RockyLinux 9.4 to RockyLinux 9.5, and
the openssl version was updated from 3.0 to 3.2 as well. The test
first revokes the frontend certificate using openssl ca -revoke, and
then generates a separate CRL file using the openssl ca -gencrl
command. I suspect that openssl 3.2 now checks the revoked certificate
itself and decides that it is not valid.
Let's see how the buildfarm reacts.
|
|
This is a follow-up to commit
ab091663b09ef8c2d0a1841921597948c597444e.
Add a test case using pgproto to the existing 076.copy_hang.
Backpatch-through: v4.1
|
|
|
|
Commit 6b7d585eb1c693e4ffb5b8e6ed9aa0f067fa1b89 invalidates the query
cache if any ALTER ROLE/USER statement is used. Actually this is
overkill, because the following queries do not affect the privileges
of the role:
- ALTER ROLE user WITH [ENCRYPTED] PASSWORD
- ALTER ROLE user WITH CONNECTION LIMIT
So do not invalidate the query cache when those commands are used.
Backpatch-through: v4.1
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2024-October/004532.html
|
|
The buildfarm reported a 006.memqcache failure. This was caused by a
mistake in the test script (test.sh). It executes
pcp_invalidate_query_cache, then compares the result of a query
calling current_timestamp that is already in the query cache (using
the /*FORCE QUERY CACHE*/ comment). Since pcp_invalidate_query_cache
just places an invalidation request and the next query processes it,
comparing the result right after executing "SELECT current_timestamp"
with the previously cached result naturally shows equality, and the
test failed. To fix this, execute a different query after
pcp_invalidate_query_cache.
I also found that the test not only fails, but sometimes causes a
timeout in my local environment. Inspecting the remaining child
process showed that the SIGINT handler was likely not executed (the
variable exit_request was not set). I suspect this is because
pool_clear_memory_cache(), which is responsible for actually clearing
the query cache, blocks all signals including SIGINT, and that is why
the signal handler for SIGINT was not executed. Since
pool_clear_memory_cache() already uses pool_shmem_lock() to protect
the operation on the query cache, the signal blocking is not
necessary. In this commit I just removed the calls to POOL_SETMASK2
and POOL_SETMASK.
|
|
Previously it was not possible to invalidate the query cache without
restarting pgpool. This commit adds a new PCP command,
"pcp_invalidate_query_cache", to do just that. Note that this command
only places a query cache invalidation request on the shared memory;
the actual invalidation is performed by a pgpool child process.
The reasons the PCP process cannot remove the cache directly are:
1) the connection handle to the memcached server is not managed by the
PCP process.
2) removing shared memory query cache entries needs an interlock using
pool_shmem_lock(), which may not work well in the PCP process. Also a
function used here (pool_clear_memory_cache()) uses PG_TRY, which is
only usable in pgpool child processes.
If a pgpool child process finds such a request, it invalidates all
query cache entries on the shared memory. If the query cache storage
is memcached, pgpool issues memcached_flush() so that all query cache
entries on memcached are flushed immediately.
Note that a pgpool child process checks for the invalidation request
after processing the current query or response from a backend. This
means that if all pgpool child processes sit idle, the request will
not be processed until one of them receives a message from either a
frontend or a backend.
Another note concerns the query cache statistics shown by the "show
pool_cache" command. Since the cache invalidation does not clear the
statistics, some of them (num_cache_hits and num_selects) continue to
increase even after the invalidation. Initializing the statistics at
the same time would be possible, but I am not sure all users would
want that.
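The request/processing split described above can be sketched with a
tiny model. Plain variables stand in for real shared memory and the
names are hypothetical, not pgpool's:

```c
#include <stdbool.h>

typedef struct
{
    bool invalidate_request;    /* set by pcp_invalidate_query_cache */
} QueryCacheControl;

/* The PCP process only raises the flag: it holds no memcached handle
 * and cannot use pool_shmem_lock()/PG_TRY, so it must not clear the
 * cache itself. */
static void pcp_request_invalidation(QueryCacheControl *ctl)
{
    ctl->invalidate_request = true;
}

/* Called by a child process after it finishes the current query or
 * backend response; returns true if it performed an invalidation.
 * Clearing cache_entries stands in for wiping shmem/memcached. */
static bool child_check_invalidation(QueryCacheControl *ctl,
                                     int *cache_entries)
{
    if (!ctl->invalidate_request)
        return false;
    *cache_entries = 0;
    ctl->invalidate_request = false;
    return true;
}
```

This also shows why an idle child never services the request: the
check only runs on the child's message-processing path.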
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2024-October/004525.html
|
|
Recognize the /*FORCE QUERY CACHE*/ SQL statement comment so that any
read-only SELECT/WITH query is cached. This is the opposite of the
/*NO QUERY CACHE*/ comment. This feature should be used carefully; see
the manual for more details.
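For illustration, a query using the comment might look like this (the
006.memqcache test elsewhere in this log uses the same trick to cache
a current_timestamp result, which would otherwise be a poor caching
candidate precisely because its value changes per call):

```
/*FORCE QUERY CACHE*/ SELECT current_timestamp;
```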
Discussion: https://github.com/pgpool/pgpool2/issues/56
|
|
When the query cache feature is enabled, it was possible for a user to
read, through the query cache, rows from tables that should not be
visible to that user.
- If a query cache entry is created for a row-security-enabled table
for user A, and then another user B accesses the table via SET ROLE
or SET SESSION_AUTHORIZATION in the same session, it was possible
for user B to retrieve rows which should not be visible to user B.
- If a query cache entry is created for a table for user A, and then
another user B accesses the table via SET ROLE or SET
SESSION_AUTHORIZATION in the same session, it was possible for user
B to retrieve rows which should not be visible to user B.
- If a query cache entry is created for a table for a user, and then
the access right on the table is revoked from the user by the REVOKE
command, it was still possible for the user to retrieve the rows
through the query cache.
Besides the vulnerabilities, there were multiple bugs in the query
cache feature.
- If a query cache entry is created for a row-security-enabled table
for a user, and then ALTER DATABASE BYPASSRLS or ALTER ROLE
BYPASSRLS disables the row security of the table, subsequent SELECTs
still return the same rows as before through the query cache.
- If a query cache entry is created for a table for a user, and then
ALTER TABLE SET SCHEMA changes things so that the search path no
longer allows access to the table, subsequent SELECTs still return
the rows as before through the query cache.
To fix the above, the following changes are made:
- Do not create or use query cache entries for row-security-enabled
tables (even if the table is included in
cache_safe_memqcache_table_list).
- Do not create or use query cache entries if SET ROLE/SET
AUTHORIZATION is executed in the session (query cache invalidation
is performed when a table is modified, as usual).
- Remove the entire query cache if REVOKE/ALTER DATABASE/ALTER
TABLE/ALTER ROLE is executed. If the command is executed in an
explicit transaction, do not create or use query cache entries until
the transaction is committed (query cache invalidation is performed
when a table is modified, as usual). If the transaction is aborted,
do not remove the query cache.
Patch is created by Tatsuo Ishii.
Backpatch-through: v4.1
|
|
Currently the only way to trigger log rotation in the logging
collector process is to send the SIGUSR1 signal directly to it.
However, it would be nice to have a way to do this with an external
tool (e.g. logrotate) without requiring knowledge of the logging
collector's PID. This commit adds a new PCP command, "pcp_log_rotate",
for triggering log rotation.
|