-rw-r--r--  doc.ja/src/sgml/connection-pooling.sgml            2
-rw-r--r--  doc.ja/src/sgml/connection-settings.sgml          24
-rw-r--r--  doc.ja/src/sgml/example-Aurora.sgml                4
-rw-r--r--  doc.ja/src/sgml/example-cluster.sgml              12
-rw-r--r--  doc.ja/src/sgml/example-replication-si-mode.sgml   4
-rw-r--r--  doc.ja/src/sgml/loadbalance.sgml                   2
-rw-r--r--  doc.ja/src/sgml/memcache.sgml                     10
-rw-r--r--  doc.ja/src/sgml/stream-check.sgml                  8
-rw-r--r--  doc.ja/src/sgml/watchdog.sgml                     10
-rw-r--r--  doc/src/sgml/connection-pooling.sgml               2
-rw-r--r--  doc/src/sgml/connection-settings.sgml             24
-rw-r--r--  doc/src/sgml/example-Aurora.sgml                   2
-rw-r--r--  doc/src/sgml/example-cluster.sgml                 12
-rw-r--r--  doc/src/sgml/example-replication-si-mode.sgml      4
-rw-r--r--  doc/src/sgml/loadbalance.sgml                      2
-rw-r--r--  doc/src/sgml/memcache.sgml                         8
-rw-r--r--  doc/src/sgml/stream-check.sgml                     8
-rw-r--r--  doc/src/sgml/watchdog.sgml                         6
-rw-r--r--  src/auth/pool_auth.c                              85
-rw-r--r--  src/context/pool_query_context.c                   8
-rw-r--r--  src/include/pcp/libpcp_ext.h                       8
-rw-r--r--  src/include/pool.h                                11
-rw-r--r--  src/include/protocol/pool_process_query.h          4
-rw-r--r--  src/include/protocol/pool_proto_modules.h          3
-rw-r--r--  src/protocol/child.c                              70
-rw-r--r--  src/protocol/pool_process_query.c                 39
-rw-r--r--  src/protocol/pool_proto_modules.c                  2
-rw-r--r--  src/rewrite/pool_lobj.c                            4
-rw-r--r--  src/sample/pgpool.conf.sample-stream              42
-rw-r--r--  src/tools/pcp/pcp_frontend_client.c                2
30 files changed, 245 insertions, 177 deletions
diff --git a/doc.ja/src/sgml/connection-pooling.sgml b/doc.ja/src/sgml/connection-pooling.sgml
index 50b7a23c6..e9c8253c7 100644
--- a/doc.ja/src/sgml/connection-pooling.sgml
+++ b/doc.ja/src/sgml/connection-pooling.sgml
@@ -1214,7 +1214,7 @@ local0.* /var/log/pgpool.log
</varlistentry>
<varlistentry id="guc-log-backend-messages" xreflabel="log_backend_messages">
- <term><varname>log_backend_messages</varname> (<type>boolean</type>)
+ <term><varname>log_backend_messages</varname> (<type>enum</type>)
<indexterm>
<!--
<primary><varname>log_backend_messages</varname> configuration parameter</primary>
diff --git a/doc.ja/src/sgml/connection-settings.sgml b/doc.ja/src/sgml/connection-settings.sgml
index f748f4cb4..0a39e05ff 100644
--- a/doc.ja/src/sgml/connection-settings.sgml
+++ b/doc.ja/src/sgml/connection-settings.sgml
@@ -976,9 +976,9 @@
<para>
このモードはもっともよく使われており、推薦できるクラスタリングモードです。
このモードでは<productname>PostgreSQL</productname>が個々のサーバをレプリケーションします。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'streaming_replication'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にstreaming_replicationを設定してください。
<programlisting>
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
このモードでは127台までのストリーミングレプリケーションスタンバイサーバを使用できます。
また、スタンバイサーバをまったく使用しないことも可能です。
@@ -1004,9 +1004,9 @@ backend_clustering_mode = 'streaming_replication'
between <productname>PostgreSQL</> backends.
-->
このモードでは<productname>PostgreSQL</>間のデータレプリケーションを<productname>Pgpool-II</productname>に行わせます。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'native_replication'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にnative_replicationを設定してください。
<programlisting>
-backend_clustering_mode = 'native_replication'
+backend_clustering_mode = native_replication
</programlisting>
このモードでは127台までのレプリカサーバを使用できます。
また、レプリカサーバをまったく使用しないことも可能です。
@@ -1575,9 +1575,9 @@ backend_clustering_mode = 'native_replication'
<para>
このモードは、ネイティブレプリケーションモードと似ていますが、更にノードをまたがる可視性の一貫性を保証します。
実装は研究論文<xref linkend="mishima2009">に基づいています。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'snapshot_isolation'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にsnapshot_isolationを設定してください。
<programlisting>
-backend_clustering_mode = 'snapshot_isolation'
+backend_clustering_mode = snapshot_isolation
</programlisting>
たとえば、以下のようなノードにまたがる可視性の一貫性がないことからくるノード間のデータ不整合を防ぐことができます。
ここで、S1, S2はセッションを表し、N1, N2はPostgreSQLのサーバ1と2を表します。
@@ -1646,9 +1646,9 @@ S2/N2: COMMIT;
<para>
このモードでは<productname>PostgreSQL</productname>が個々のサーバをレプリケーションします。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'logical_replication'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にlogical_replicationを設定してください。
<programlisting>
-backend_clustering_mode = 'logical_replication'
+backend_clustering_mode = logical_replication
</programlisting>
このモードでは127台までのロジカルレプリケーションサーバを使用できます。
また、スタンバイサーバをまったく使用しないことも可能です。
@@ -1666,9 +1666,9 @@ backend_clustering_mode = 'logical_replication'
<para>
このモードでは<productname>Pgpool-II</productname>を<acronym>Slony-I</acronym>と組み合わせて使用します。
Slony-Iが実際にデータのレプリケーションを行います。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'slony'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にslonyを設定してください。
<programlisting>
-backend_clustering_mode = 'slony'
+backend_clustering_mode = slony
</programlisting>
このモードでは127台までのレプリカサーバを使用できます。
また、レプリカサーバをまったく使用しないことも可能です。
@@ -1689,9 +1689,9 @@ backend_clustering_mode = 'slony'
このモードでは、<productname>Pgpool-II</>はデータベースの同期に関しては関与しません。
システム全体に意味の有る動作をさせるのはユーザの責任となります。
このモードでは負荷分散は<emphasis>できません</emphasis>。
- このモードを有効にするには<varname>backend_clustering_mode</varname>に'raw'を設定してください。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>にrawを設定してください。
<programlisting>
-backend_clustering_mode = 'raw'
+backend_clustering_mode = raw
</programlisting>
</para>
</sect2>
diff --git a/doc.ja/src/sgml/example-Aurora.sgml b/doc.ja/src/sgml/example-Aurora.sgml
index 10f1ae717..569d6f1dc 100644
--- a/doc.ja/src/sgml/example-Aurora.sgml
+++ b/doc.ja/src/sgml/example-Aurora.sgml
@@ -41,13 +41,13 @@
from <filename>pgpool.conf.sample</filename>.
Make sure your <filename>pgpool.conf</filename> includes following line:
<programlisting>
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
-->
<filename>pgpool.conf.sample</filename>をコピーして<filename>pgpool.conf</filename>を作ります。
以下の行が<filename>pgpool.conf</filename>に含まれていることを確認してください。
<programlisting>
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
</para>
</listitem>
diff --git a/doc.ja/src/sgml/example-cluster.sgml b/doc.ja/src/sgml/example-cluster.sgml
index 809f1f050..a9fa53135 100644
--- a/doc.ja/src/sgml/example-cluster.sgml
+++ b/doc.ja/src/sgml/example-cluster.sgml
@@ -188,25 +188,25 @@
<tbody>
<row>
<entry morerows='1'>自動フェイルオーバ</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/failover.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/failover.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/failover.sh.sample">/etc/pgpool-II/sample_scripts/failover.sh.sample</ulink></entry>
<entry>フェイルオーバを実行するスクリプト。<xref linkend="GUC-FAILOVER-COMMAND">で使用します。</entry>
</row>
<row>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/follow_primary.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/follow_primary.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/follow_primary.sh.sample">/etc/pgpool-II/sample_scripts/follow_primary.sh.sample</ulink></entry>
<entry>上記フェイルオーバスクリプトが実行された後に、新しいプライマリサーバとスタンバイサーバを同期させるスクリプト。<xref linkend="GUC-FOLLOW-PRIMARY-COMMAND">で使用します。 PostgreSQLサーバが2台の場合はこのスクリプトの設定は不要です。</entry>
</row>
<row>
<entry morerows='1'>オンラインリカバリ</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/recovery_1st_stage.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/recovery_1st_stage.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/recovery_1st_stage.sample">/etc/pgpool-II/sample_scripts/recovery_1st_stage.sample</ulink></entry>
<entry>スタンバイサーバをリカバリするスクリプト。<xref linkend="GUC-RECOVERY-1ST-STAGE-COMMAND">で使用します。</entry>
</row>
<row>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/pgpool_remote_start.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/pgpool_remote_start.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/pgpool_remote_start.sample">/etc/pgpool-II/sample_scripts/pgpool_remote_start.sample</ulink></entry>
<entry>上記<xref linkend="GUC-RECOVERY-1ST-STAGE-COMMAND">が実行された後に、スタンバイノードを起動させるスクリプト。</entry>
</row>
<row>
<entry morerows='1'>Watchdog</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/escalation.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/escalation.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/escalation.sh.sample">/etc/pgpool-II/sample_scripts/escalation.sh.sample</ulink></entry>
<entry>
任意の設定。Pgpool-IIのリーダー/スタンバイ切り替え時に、旧Watchdogリーダープロセスの異常終了によって旧Watchdogリーダーで仮想IPが起動したまま、新しいリーダーノードで仮想IPが起動されることを防ぐために、新しいリーダー以外で起動している仮想IPを停止するスクリプト。<xref linkend="guc-wd-escalation-command">で使用します。
</entry>
@@ -627,7 +627,7 @@ pgpool:4aa0cb9673e84b06d4c8a848c80eb5d0
</para>
<programlisting>
[root@server1 ~]# vi /etc/pgpool-II/pgpool.conf
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
</sect3>
diff --git a/doc.ja/src/sgml/example-replication-si-mode.sgml b/doc.ja/src/sgml/example-replication-si-mode.sgml
index 4ed4ee3e1..605a44278 100644
--- a/doc.ja/src/sgml/example-replication-si-mode.sgml
+++ b/doc.ja/src/sgml/example-replication-si-mode.sgml
@@ -433,7 +433,7 @@ default_transaction_isolation = 'repeatable read'
ネイティブレプリケーションモードの場合
</para>
<programlisting>
-backend_clustering_mode = 'native_replication'
+backend_clustering_mode = native_replication
</programlisting>
</listitem>
<listitem>
@@ -441,7 +441,7 @@ backend_clustering_mode = 'native_replication'
スナップショットアイソレーションモードの場合
</para>
<programlisting>
-backend_clustering_mode = 'snapshot_isolation'
+backend_clustering_mode = snapshot_isolation
</programlisting>
</listitem>
</itemizedlist>
diff --git a/doc.ja/src/sgml/loadbalance.sgml b/doc.ja/src/sgml/loadbalance.sgml
index 670187180..87cfa8e97 100644
--- a/doc.ja/src/sgml/loadbalance.sgml
+++ b/doc.ja/src/sgml/loadbalance.sgml
@@ -1421,7 +1421,7 @@ app_name_redirect_preference_list &gt; database_redirect_preference_list &gt; us
</varlistentry>
<varlistentry id="guc-disable-load-balance-on-write" xreflabel="disable_load_balance_on_write">
- <term><varname>disable_load_balance_on_write</varname> (<type>string</type>)
+ <term><varname>disable_load_balance_on_write</varname> (<type>enum</type>)
<indexterm>
<!--
<primary><varname>disable_load_balance_on_write</varname> configuration parameter</primary>
diff --git a/doc.ja/src/sgml/memcache.sgml b/doc.ja/src/sgml/memcache.sgml
index 6327cc207..0a6665138 100644
--- a/doc.ja/src/sgml/memcache.sgml
+++ b/doc.ja/src/sgml/memcache.sgml
@@ -256,7 +256,7 @@
<variablelist>
<varlistentry id="guc-memqcache-method" xreflabel="memqcache_method">
- <term><varname>memqcache_method</varname> (<type>string</type>)
+ <term><varname>memqcache_method</varname> (<type>enum</type>)
<indexterm>
<!--
<primary><varname>memqcache_method</varname> configuration parameter</primary>
@@ -290,7 +290,7 @@
<tbody>
<row>
- <entry><literal>'shmem'</literal></entry>
+ <entry><literal>shmem</literal></entry>
<!--
<entry>Use shared memory</entry>
-->
@@ -298,7 +298,7 @@
</row>
<row>
- <entry><literal>'memcached'</literal></entry>
+ <entry><literal>memcached</literal></entry>
<!--
<entry>Use <ulink url="http://memcached.org/">memcached</ulink></entry>
-->
@@ -338,9 +338,9 @@
<para>
<!--
- Default is <literal>'shmem'</literal>.
+ Default is <literal>shmem</literal>.
-->
- デフォルトは<literal>'shmem'</literal>です。
+ デフォルトは<literal>shmem</literal>です。
</para>
<para>
diff --git a/doc.ja/src/sgml/stream-check.sgml b/doc.ja/src/sgml/stream-check.sgml
index 7295faef9..4b3a85e69 100644
--- a/doc.ja/src/sgml/stream-check.sgml
+++ b/doc.ja/src/sgml/stream-check.sgml
@@ -388,7 +388,7 @@ GRANT pg_monitor TO sr_check_user;
</varlistentry>
<varlistentry id="guc-log-standby-delay" xreflabel="log_standby_delay">
- <term><varname>log_standby_delay</varname> (<type>string</type>)
+ <term><varname>log_standby_delay</varname> (<type>enum</type>)
<indexterm>
<!--
<primary><varname>log_standby_delay</varname> configuration parameter</primary>
@@ -426,7 +426,7 @@ GRANT pg_monitor TO sr_check_user;
<tbody>
<row>
- <entry><literal>'none'</literal></entry>
+ <entry><literal>none</literal></entry>
<!--
<entry>Never log the standby delay</entry>
-->
@@ -434,7 +434,7 @@ GRANT pg_monitor TO sr_check_user;
</row>
<row>
- <entry><literal>'always'</literal></entry>
+ <entry><literal>always</literal></entry>
<!--
<entry>Log the standby delay, every time the replication delay is checked</entry>
-->
@@ -442,7 +442,7 @@ GRANT pg_monitor TO sr_check_user;
</row>
<row>
- <entry><literal>'if_over_threshold'</literal></entry>
+ <entry><literal>if_over_threshold</literal></entry>
<!--
<entry>Only log the standby delay, when it exceeds <xref linkend="guc-delay-threshold"> value</entry>
-->
diff --git a/doc.ja/src/sgml/watchdog.sgml b/doc.ja/src/sgml/watchdog.sgml
index ceb546079..b4eff8db6 100644
--- a/doc.ja/src/sgml/watchdog.sgml
+++ b/doc.ja/src/sgml/watchdog.sgml
@@ -1320,7 +1320,7 @@
<variablelist>
<varlistentry id="guc-wd-lifecheck-method" xreflabel="wd_lifecheck_method">
- <term><varname>wd_lifecheck_method</varname> (<type>string</type>)
+ <term><varname>wd_lifecheck_method</varname> (<type>enum</type>)
<indexterm>
<!--
<primary><varname>wd_lifecheck_method</varname> configuration parameter</primary>
@@ -1331,11 +1331,11 @@
<listitem>
<para>
<!--
- Specifies the method of life check. This can be either of <literal>'heartbeat'</literal> (default),
- <literal>'query'</literal> or <literal>'external'</literal>.
+ Specifies the method of life check. This can be either of <literal>heartbeat</literal> (default),
+ <literal>query</literal> or <literal>external</literal>.
-->
死活監視の方法を指定します。
- 指定できる値は <literal>'heartbeat'</literal> (デフォルト)、<literal>'query'</literal>、または<literal>'external'</literal> です。
+ 指定できる値は <literal>heartbeat</literal> (デフォルト)、<literal>query</literal>、または<literal>external</literal> です。
</para>
<para>
<!--
@@ -1345,7 +1345,7 @@
If there are no signal for a certain period, watchdog regards is as failure
of the <productname>Pgpool-II</productname> .
-->
- <literal>'heartbeat'</literal>を指定した場合には、監視は「ハートビートモード」で行われます。
+ <literal>heartbeat</literal>を指定した場合には、監視は「ハートビートモード」で行われます。
watchdog は一定間隔でハートビート信号(UDP パケット)を他の<productname>Pgpool-II</productname>へ送信します。
またwatchdogは他の<productname>Pgpool-II</productname>から送られてくる信号を受信します。
これが一定時間以上途絶えた場合にはその<productname>Pgpool-II</productname>に障害が発生したと判断します。
diff --git a/doc/src/sgml/connection-pooling.sgml b/doc/src/sgml/connection-pooling.sgml
index 9f6a751cd..f6ebb773c 100644
--- a/doc/src/sgml/connection-pooling.sgml
+++ b/doc/src/sgml/connection-pooling.sgml
@@ -791,7 +791,7 @@
</varlistentry>
<varlistentry id="guc-log-backend-messages" xreflabel="log_backend_messages">
- <term><varname>log_backend_messages</varname> (<type>boolean</type>)
+ <term><varname>log_backend_messages</varname> (<type>enum</type>)
<indexterm>
<primary><varname>log_backend_messages</varname> configuration parameter</primary>
</indexterm>
diff --git a/doc/src/sgml/connection-settings.sgml b/doc/src/sgml/connection-settings.sgml
index 262c76d05..0f78b9931 100644
--- a/doc/src/sgml/connection-settings.sgml
+++ b/doc/src/sgml/connection-settings.sgml
@@ -655,10 +655,10 @@
This mode is most popular and recommended clustering mode. In this
mode <productname>PostgreSQL</productname> is responsible to
replicate each servers. To enable this mode, use
- 'streaming_replication' for
+ streaming_replication for
<varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
In this mode you can have up to 127 streaming replication standby servers.
Also it is possible not to have standby server at all.
@@ -686,10 +686,10 @@ backend_clustering_mode = 'streaming_replication'
<para>
This mode makes the <productname>Pgpool-II</productname> to
replicate data between <productname>PostgreSQL</productname>
- backends. To enable this mode, use 'native_replication' for
+ backends. To enable this mode, use native_replication for
<varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'native_replication'
+backend_clustering_mode = native_replication
</programlisting>
In this mode you can have up to 127 standby replication servers.
Also it is possible not to have standby server at all.
@@ -1117,10 +1117,10 @@ backend_clustering_mode = 'native_replication'
This mode is similar to the native replication mode except it adds
the visibility consistency among nodes. The implementation is based
on a research paper <xref linkend="mishima2009">.
- To enable this mode, use 'snapshot_isolation' for
+ To enable this mode, use snapshot_isolation for
<varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'snapshot_isolation'
+backend_clustering_mode = snapshot_isolation
</programlisting>
For example, you can avoid following inconsistency among nodes caused by the
visibility inconsistency. Here S1 and S2 denotes sessions, while N1
@@ -1194,10 +1194,10 @@ default_transaction_isolation = 'repeatable read'
<para>
In this mode
<productname>PostgreSQL</productname> is responsible to replicate
- each servers. To enable this mode, use 'logical_replication' for
+ each servers. To enable this mode, use logical_replication for
<varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'logical_replication'
+backend_clustering_mode = logical_replication
</programlisting>
In this mode you can have up to 127 logical replication standby servers.
Also it is possible not to have standby server at all.
@@ -1218,10 +1218,10 @@ backend_clustering_mode = 'logical_replication'
<para>
This mode is used to couple <productname>Pgpool-II</productname>
with <acronym>Slony-I</acronym>. Slony-I is responsible for doing
- the actual data replication. To enable this mode, use 'slony' for
+ the actual data replication. To enable this mode, use slony for
<varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'slony'
+backend_clustering_mode = slony
</programlisting>
In this mode you can have up to 127 replica servers. Also it is
possible not to have replica server at all.
@@ -1247,9 +1247,9 @@ backend_clustering_mode = 'slony'
In this mode, <productname>Pgpool-II</> does not care about the database synchronization.
It's user's responsibility to make the whole system does a meaningful thing.
Load balancing is <emphasis>not</emphasis> possible in the mode.
- To enable this mode, use 'raw' for <varname>backend_clustering_mode</varname>.
+ To enable this mode, use raw for <varname>backend_clustering_mode</varname>.
<programlisting>
-backend_clustering_mode = 'raw'
+backend_clustering_mode = raw
</programlisting>
</para>
</sect2>
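
The documentation hunks above consistently drop the quotes around enum-type parameter values. A minimal pgpool.conf fragment showing the spelling the updated docs use (whether the older quoted spelling remains accepted is an assumption based on PostgreSQL-style GUC parsing, not something this patch states):

```
# pgpool.conf -- clustering mode, unquoted enum value as in the updated docs
backend_clustering_mode = streaming_replication

#backend_clustering_mode = 'streaming_replication'   # quoted form shown in older docs
```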
diff --git a/doc/src/sgml/example-Aurora.sgml b/doc/src/sgml/example-Aurora.sgml
index ce94f1927..540132f32 100644
--- a/doc/src/sgml/example-Aurora.sgml
+++ b/doc/src/sgml/example-Aurora.sgml
@@ -24,7 +24,7 @@
from <filename>pgpool.conf.sample</filename>.
Make sure your <filename>pgpool.conf</filename> includes following line:
<programlisting>
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
</para>
</listitem>
diff --git a/doc/src/sgml/example-cluster.sgml b/doc/src/sgml/example-cluster.sgml
index d5a4150ea..91b86491e 100644
--- a/doc/src/sgml/example-cluster.sgml
+++ b/doc/src/sgml/example-cluster.sgml
@@ -190,25 +190,25 @@
<tbody>
<row>
<entry morerows='1'>Failover</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/failover.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/failover.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/failover.sh.sample">/etc/pgpool-II/sample_scripts/failover.sh.sample</ulink></entry>
<entry>Run by <xref linkend="GUC-FAILOVER-COMMAND"> to perform failover</entry>
</row>
<row>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/follow_primary.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/follow_primary.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/follow_primary.sh.sample">/etc/pgpool-II/sample_scripts/follow_primary.sh.sample</ulink></entry>
<entry>Run by <xref linkend="GUC-FOLLOW-PRIMARY-COMMAND"> to synchronize the Standby with the new Primary after failover.</entry>
</row>
<row>
<entry morerows='1'>Online recovery</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/recovery_1st_stage.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/recovery_1st_stage.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/recovery_1st_stage.sample">/etc/pgpool-II/sample_scripts/recovery_1st_stage.sample</ulink></entry>
<entry>Run by <xref linkend="GUC-RECOVERY-1ST-STAGE-COMMAND"> to recovery a Standby node</entry>
</row>
<row>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/pgpool_remote_start.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/pgpool_remote_start.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/pgpool_remote_start.sample">/etc/pgpool-II/sample_scripts/pgpool_remote_start.sample</ulink></entry>
<entry>Run after <xref linkend="GUC-RECOVERY-1ST-STAGE-COMMAND"> to start the Standby node</entry>
</row>
<row>
<entry morerows='1'>Watchdog</entry>
- <entry><ulink url="https://git.postgresql.org/gitweb/?p=pgpool2.git;a=blob_plain;f=src/sample/scripts/escalation.sh.sample;hb=refs/heads/V4_6_STABLE">/etc/pgpool-II/sample_scripts/escalation.sh.sample</ulink></entry>
+ <entry><ulink url="https://raw.githubusercontent.com/pgpool/pgpool2/refs/heads/master/src/sample/scripts/escalation.sh.sample">/etc/pgpool-II/sample_scripts/escalation.sh.sample</ulink></entry>
<entry>Optional Configuration. Run by <xref linkend="guc-wd-escalation-command"> to switch the Leader/Standby Pgpool-II safely</entry>
</row>
</tbody>
@@ -679,7 +679,7 @@ pgpool:4aa0cb9673e84b06d4c8a848c80eb5d0
</para>
<programlisting>
[root@server1 ~]# vi /etc/pgpool-II/pgpool.conf
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
</programlisting>
</sect3>
diff --git a/doc/src/sgml/example-replication-si-mode.sgml b/doc/src/sgml/example-replication-si-mode.sgml
index e06b91e77..e4280837a 100644
--- a/doc/src/sgml/example-replication-si-mode.sgml
+++ b/doc/src/sgml/example-replication-si-mode.sgml
@@ -432,7 +432,7 @@ default_transaction_isolation = 'repeatable read'
Native replication mode
</para>
<programlisting>
-backend_clustering_mode = 'native_replication'
+backend_clustering_mode = native_replication
</programlisting>
</listitem>
<listitem>
@@ -440,7 +440,7 @@ backend_clustering_mode = 'native_replication'
Snapshot isolation mode
</para>
<programlisting>
-backend_clustering_mode = 'snapshot_isolation'
+backend_clustering_mode = snapshot_isolation
</programlisting>
</listitem>
</itemizedlist>
diff --git a/doc/src/sgml/loadbalance.sgml b/doc/src/sgml/loadbalance.sgml
index 2b186467a..ee19fabeb 100644
--- a/doc/src/sgml/loadbalance.sgml
+++ b/doc/src/sgml/loadbalance.sgml
@@ -1039,7 +1039,7 @@ app_name_redirect_preference_list &gt; database_redirect_preference_list &gt; us
</varlistentry>
<varlistentry id="guc-disable-load-balance-on-write" xreflabel="disable_load_balance_on_write">
- <term><varname>disable_load_balance_on_write</varname> (<type>string</type>)
+ <term><varname>disable_load_balance_on_write</varname> (<type>enum</type>)
<indexterm>
<primary><varname>disable_load_balance_on_write</varname> configuration parameter</primary>
</indexterm>
diff --git a/doc/src/sgml/memcache.sgml b/doc/src/sgml/memcache.sgml
index 9888b8865..065de41ee 100644
--- a/doc/src/sgml/memcache.sgml
+++ b/doc/src/sgml/memcache.sgml
@@ -192,7 +192,7 @@
<variablelist>
<varlistentry id="guc-memqcache-method" xreflabel="memqcache_method">
- <term><varname>memqcache_method</varname> (<type>string</type>)
+ <term><varname>memqcache_method</varname> (<type>enum</type>)
<indexterm>
<primary><varname>memqcache_method</varname> configuration parameter</primary>
</indexterm>
@@ -215,12 +215,12 @@
<tbody>
<row>
- <entry><literal>'shmem'</literal></entry>
+ <entry><literal>shmem</literal></entry>
<entry>Use shared memory</entry>
</row>
<row>
- <entry><literal>'memcached'</literal></entry>
+ <entry><literal>memcached</literal></entry>
<entry>Use <ulink url="http://memcached.org/">memcached</ulink></entry>
</row>
@@ -245,7 +245,7 @@
If you are not sure which memqcache_method to be used, start with <varname>shmem</varname>.
</para>
<para>
- Default is <literal>'shmem'</literal>.
+ Default is <literal>shmem</literal>.
</para>
<para>
diff --git a/doc/src/sgml/stream-check.sgml b/doc/src/sgml/stream-check.sgml
index 8515b6510..d2ca3ca49 100644
--- a/doc/src/sgml/stream-check.sgml
+++ b/doc/src/sgml/stream-check.sgml
@@ -310,7 +310,7 @@ GRANT pg_monitor TO sr_check_user;
</varlistentry>
<varlistentry id="guc-log-standby-delay" xreflabel="log_standby_delay">
- <term><varname>log_standby_delay</varname> (<type>string</type>)
+ <term><varname>log_standby_delay</varname> (<type>enum</type>)
<indexterm>
<primary><varname>log_standby_delay</varname> configuration parameter</primary>
</indexterm>
@@ -334,17 +334,17 @@ GRANT pg_monitor TO sr_check_user;
<tbody>
<row>
- <entry><literal>'none'</literal></entry>
+ <entry><literal>none</literal></entry>
<entry>Never log the standby delay</entry>
</row>
<row>
- <entry><literal>'always'</literal></entry>
+ <entry><literal>always</literal></entry>
<entry>Log the standby delay if it's greater than 0, every time the replication delay is checked</entry>
</row>
<row>
- <entry><literal>'if_over_threshold'</literal></entry>
+ <entry><literal>if_over_threshold</literal></entry>
<entry>Only log the standby delay, when it exceeds <xref linkend="guc-delay-threshold"> or <xref linkend="guc-delay-threshold-by-time"> value (the default)</entry>
</row>
</tbody>
diff --git a/doc/src/sgml/watchdog.sgml b/doc/src/sgml/watchdog.sgml
index 0a238f302..86812ce32 100644
--- a/doc/src/sgml/watchdog.sgml
+++ b/doc/src/sgml/watchdog.sgml
@@ -951,15 +951,15 @@ pgpool_port2 = 9999
<variablelist>
<varlistentry id="guc-wd-lifecheck-method" xreflabel="wd_lifecheck_method">
- <term><varname>wd_lifecheck_method</varname> (<type>string</type>)
+ <term><varname>wd_lifecheck_method</varname> (<type>enum</type>)
<indexterm>
<primary><varname>wd_lifecheck_method</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
- Specifies the method of life check. This can be either of <literal>'heartbeat'</literal> (default),
- <literal>'query'</literal> or <literal>'external'</literal>.
+ Specifies the method of life check. This can be either of <literal>heartbeat</literal> (default),
+ <literal>query</literal> or <literal>external</literal>.
</para>
<para>
<literal>heartbeat</literal>: In this mode, watchdog sends the heartbeat signals (<acronym>UDP</acronym> packets)
diff --git a/src/auth/pool_auth.c b/src/auth/pool_auth.c
index 54d646bc3..198d8c99e 100644
--- a/src/auth/pool_auth.c
+++ b/src/auth/pool_auth.c
@@ -58,7 +58,8 @@
#define MAX_SASL_PAYLOAD_LEN 1024
-static POOL_STATUS pool_send_backend_key_data(POOL_CONNECTION * frontend, int pid, int key, int protoMajor);
+static void pool_send_backend_key_data(POOL_CONNECTION * frontend, int pid,
+ char *key, int32 keylen, int protoMajor);
static int do_clear_text_password(POOL_CONNECTION * backend, POOL_CONNECTION * frontend, int reauth, int protoMajor);
static void pool_send_auth_fail(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * cp);
static int do_md5(POOL_CONNECTION * backend, POOL_CONNECTION * frontend, int reauth, int protoMajor,
@@ -92,9 +93,7 @@ connection_do_auth(POOL_CONNECTION_POOL_SLOT * cp, char *password)
int length;
int auth_kind;
char state;
- char *p;
- int pid,
- key;
+ int pid;
bool keydata_done;
/*
@@ -244,6 +243,9 @@ connection_do_auth(POOL_CONNECTION_POOL_SLOT * cp, char *password)
switch (kind)
{
+ char *p;
+ int32 keylen;
+
case 'K': /* backend key data */
keydata_done = true;
ereport(DEBUG1,
@@ -251,12 +253,14 @@ connection_do_auth(POOL_CONNECTION_POOL_SLOT * cp, char *password)
/* read message length */
pool_read_with_error(cp->con, &length, sizeof(length), "message length for authentication kind 'K'");
- if (ntohl(length) != 12)
+ length = ntohl(length);
+ keylen = length - sizeof(int32) - sizeof(int32);
+ if (keylen > MAX_CANCELKEY_LENGTH)
{
ereport(ERROR,
(errmsg("failed to authenticate"),
- errdetail("invalid backend key data length. received %d bytes when expecting 12 bytes"
- ,ntohl(length))));
+ errdetail("invalid backend key data length. received %d bytes exceeding %d",
+ ntohl(length), MAX_CANCELKEY_LENGTH)));
}
/* read pid */
@@ -264,9 +268,9 @@ connection_do_auth(POOL_CONNECTION_POOL_SLOT * cp, char *password)
cp->pid = pid;
/* read key */
- pool_read_with_error(cp->con, &key, sizeof(key),
- "key for authentication kind 'K'");
- cp->key = key;
+ keylen = length - sizeof(int32) - sizeof(int32);
+ p = pool_read2(cp->con, keylen);
+ memcpy(cp->key, p, keylen);
break;
case 'Z': /* Ready for query */
@@ -332,14 +336,15 @@ pool_do_auth(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * cp)
{
signed char kind;
int pid;
- int key;
int protoMajor;
int length;
int authkind;
int i;
int message_length = 0;
StartupPacket *sp;
-
+ int32 keylen; /* cancel key length */
+ char cancel_key[MAX_CANCELKEY_LENGTH];
+ char *p;
protoMajor = MAIN_CONNECTION(cp)->sp->major;
@@ -722,17 +727,23 @@ read_kind:
}
/*
- * message length (V3 only)
+ * Read BackendKeyData message length.
*/
if (protoMajor == PROTO_MAJOR_V3)
{
- if ((length = pool_read_message_length(cp)) != 12)
+ length = pool_read_message_length(cp);
+ keylen = length - sizeof(int32) - sizeof(int32);
+ if (keylen > MAX_CANCELKEY_LENGTH)
{
ereport(ERROR,
- (errmsg("authentication failed"),
- errdetail("invalid messages length(%d) for BackendKeyData", length)));
+ (errcode(ERRCODE_PROTOCOL_VIOLATION),
+ errmsg("cancel key length exceeds 256 bytes")));
}
}
+ else
+ keylen = 4;
+
+ elog(DEBUG1, "cancel key length: %d", keylen);
/*
* OK, read pid and secret key
@@ -758,13 +769,17 @@ read_kind:
CONNECTION_SLOT(cp, i)->pid = cp->info[i].pid = pid;
/* read key */
- if (pool_read(CONNECTION(cp, i), &key, sizeof(key)) < 0)
+ p = pool_read2(CONNECTION(cp, i), keylen);
+ if (p == NULL)
{
ereport(ERROR,
(errmsg("authentication failed"),
- errdetail("failed to read key in slot %d", i)));
+ errdetail("failed to read key of length: %d in slot %d", keylen, i)));
}
- CONNECTION_SLOT(cp, i)->key = cp->info[i].key = key;
+ memcpy(CONNECTION_SLOT(cp, i)->key, p, keylen);
+ memcpy(cp->info[i].key, p, keylen);
+ memcpy(cancel_key, p, keylen);
+ CONNECTION_SLOT(cp, i)->keylen = cp->info[i].keylen = keylen;
cp->info[i].major = sp->major;
@@ -791,10 +806,13 @@ read_kind:
(errmsg("authentication failed"),
errdetail("pool_do_auth: all backends are down")));
}
- if (pool_send_backend_key_data(frontend, pid, key, protoMajor))
- ereport(ERROR,
- (errmsg("authentication failed"),
- errdetail("failed to send backend data to frontend")));
+
+
+	/*
+	 * Send the BackendKeyData, which belongs to the last backend in the
+	 * backend group, to the frontend.
+	 */
+ pool_send_backend_key_data(frontend, pid, cancel_key,
+ MAIN_CONNECTION(cp)->keylen, protoMajor);
return 0;
}
@@ -872,7 +890,8 @@ pool_do_reauth(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * cp)
pool_write_and_flush(frontend, &msglen, sizeof(msglen));
/* send BackendKeyData */
- pool_send_backend_key_data(frontend, MAIN_CONNECTION(cp)->pid, MAIN_CONNECTION(cp)->key, protoMajor);
+ pool_send_backend_key_data(frontend, MAIN_CONNECTION(cp)->pid, MAIN_CONNECTION(cp)->key,
+ MAIN_CONNECTION(cp)->keylen, protoMajor);
return 0;
}
@@ -903,29 +922,27 @@ pool_send_auth_fail(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * cp)
}
/*
- * Send backend key data to frontend. if success return 0 otherwise non 0.
+ * Send backend key data to frontend.
*/
-static POOL_STATUS pool_send_backend_key_data(POOL_CONNECTION * frontend, int pid, int key, int protoMajor)
+static void
+pool_send_backend_key_data(POOL_CONNECTION * frontend, int pid,
+ char *key, int32 keylen, int protoMajor)
{
char kind;
- int len;
+ int32 len;
/* Send backend key data */
kind = 'K';
pool_write(frontend, &kind, 1);
if (protoMajor == PROTO_MAJOR_V3)
{
- len = htonl(12);
+ len = htonl(sizeof(int32) + sizeof(int32) + keylen);
pool_write(frontend, &len, sizeof(len));
}
ereport(DEBUG1,
- (errmsg("sending backend key data"),
- errdetail("send pid %d to frontend", ntohl(pid))));
-
+ (errmsg("sending backend key data")));
pool_write(frontend, &pid, sizeof(pid));
- pool_write_and_flush(frontend, &key, sizeof(key));
-
- return 0;
+ pool_write_and_flush(frontend, key, keylen);
}
static void
diff --git a/src/context/pool_query_context.c b/src/context/pool_query_context.c
index 3b9028497..d398bee6d 100644
--- a/src/context/pool_query_context.c
+++ b/src/context/pool_query_context.c
@@ -4,7 +4,7 @@
* pgpool: a language independent connection pool server for PostgreSQL
* written by Tatsuo Ishii
*
- * Copyright (c) 2003-2024 PgPool Global Development Group
+ * Copyright (c) 2003-2025 PgPool Global Development Group
*
* Permission to use, copy, modify, and distribute this software and
* its documentation for any purpose and without fee is hereby
@@ -663,7 +663,8 @@ pool_send_and_wait(POOL_QUERY_CONTEXT * query_context,
CONNECTION(backend, i),
MAJOR(backend),
MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key);
+ MAIN_CONNECTION(backend)->key,
+ MAIN_CONNECTION(backend)->keylen);
/*
* Check if some error detected. If so, emit log. This is useful when
@@ -880,7 +881,8 @@ pool_extended_send_and_wait(POOL_QUERY_CONTEXT * query_context,
CONNECTION(backend, i),
MAJOR(backend),
MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key);
+ MAIN_CONNECTION(backend)->key,
+ MAIN_CONNECTION(backend)->keylen);
/*
* Check if some error detected. If so, emit log. This is useful
diff --git a/src/include/pcp/libpcp_ext.h b/src/include/pcp/libpcp_ext.h
index d79ddc156..3a6d87858 100644
--- a/src/include/pcp/libpcp_ext.h
+++ b/src/include/pcp/libpcp_ext.h
@@ -137,6 +137,11 @@ typedef enum
} ProcessStatus;
/*
+ * maximum cancel key length
+ */
+#define MAX_CANCELKEY_LENGTH 256
+
+/*
* Connection pool information. Placed on shared memory area.
*/
typedef struct
@@ -147,7 +152,8 @@ typedef struct
int major; /* protocol major version */
int minor; /* protocol minor version */
int pid; /* backend process id */
- int key; /* cancel key */
+ char key[MAX_CANCELKEY_LENGTH]; /* cancel key */
+ int32 keylen; /* cancel key length */
int counter; /* used counter */
time_t create_time; /* connection creation time */
time_t client_connection_time; /* client connection time */
diff --git a/src/include/pool.h b/src/include/pool.h
index 28cf1757c..c34a06d2a 100644
--- a/src/include/pool.h
+++ b/src/include/pool.h
@@ -185,7 +185,7 @@ typedef struct CancelPacket
{
int protoVersion; /* Protocol version */
int pid; /* backend process id */
- int key; /* cancel key */
+ char key[MAX_CANCELKEY_LENGTH]; /* cancel key */
} CancelPacket;
#define MAX_PASSWORD_SIZE 1024
@@ -294,7 +294,12 @@ typedef struct
{
StartupPacket *sp; /* startup packet info */
int pid; /* backend pid */
- int key; /* cancel key */
+ char key[MAX_CANCELKEY_LENGTH]; /* cancel key */
+ /*
+ * Cancel key length. In protocol version 3.0, it is 4.
+ * In 3.2 or later, the maximum length is 256.
+ */
+ int32 keylen;
POOL_CONNECTION *con;
time_t closetime; /* absolute time in second when the connection
* closed if 0, that means the connection is
@@ -655,7 +660,7 @@ extern void pcp_main(int *fds);
extern void do_child(int *fds);
extern void child_exit(int code);
-extern void cancel_request(CancelPacket * sp);
+extern void cancel_request(CancelPacket * sp, int32 len);
extern void check_stop_request(void);
extern void pool_initialize_private_backend_status(void);
extern int send_to_pg_frontend(char *data, int len, bool flush);
diff --git a/src/include/protocol/pool_process_query.h b/src/include/protocol/pool_process_query.h
index 7f91dbbcd..e799b4d9d 100644
--- a/src/include/protocol/pool_process_query.h
+++ b/src/include/protocol/pool_process_query.h
@@ -3,7 +3,7 @@
* pgpool: a language independent connection pool server for PostgreSQL
* written by Tatsuo Ishii
*
- * Copyright (c) 2003-2020 PgPool Global Development Group
+ * Copyright (c) 2003-2025 PgPool Global Development Group
*
* Permission to use, copy, modify, and distribute this software and
* its documentation for any purpose and without fee is hereby
@@ -35,7 +35,7 @@ extern void per_node_statement_log(POOL_CONNECTION_POOL * backend,
extern int pool_extract_error_message(bool read_kind, POOL_CONNECTION * backend,
int major, bool unread, char **message);
extern POOL_STATUS do_command(POOL_CONNECTION * frontend, POOL_CONNECTION * backend,
- char *query, int protoMajor, int pid, int key, int no_ready_for_query);
+ char *query, int protoMajor, int pid, char *key, int keylen, int no_ready_for_query);
extern void do_query(POOL_CONNECTION * backend, char *query, POOL_SELECT_RESULT * *result, int major);
extern void free_select_result(POOL_SELECT_RESULT * result);
extern int compare(const void *p1, const void *p2);
diff --git a/src/include/protocol/pool_proto_modules.h b/src/include/protocol/pool_proto_modules.h
index 663dcc984..28668aa6e 100644
--- a/src/include/protocol/pool_proto_modules.h
+++ b/src/include/protocol/pool_proto_modules.h
@@ -151,7 +151,8 @@ extern int RowDescription(POOL_CONNECTION * frontend,
POOL_CONNECTION_POOL * backend,
short *result);
-extern void wait_for_query_response_with_trans_cleanup(POOL_CONNECTION * frontend, POOL_CONNECTION * backend, int protoVersion, int pid, int key);
+extern void wait_for_query_response_with_trans_cleanup(POOL_CONNECTION * frontend, POOL_CONNECTION * backend,
+ int protoVersion, int pid, char *key, int keylen);
extern POOL_STATUS wait_for_query_response(POOL_CONNECTION * frontend, POOL_CONNECTION * backend, int protoVersion);
extern bool is_select_query(Node *node, char *sql);
extern bool is_commit_query(Node *node);
diff --git a/src/protocol/child.c b/src/protocol/child.c
index 7aea33540..1ef88910d 100644
--- a/src/protocol/child.c
+++ b/src/protocol/child.c
@@ -624,14 +624,16 @@ read_startup_packet(POOL_CONNECTION * cp)
len = ntohl(len);
len -= sizeof(len);
- if (len <= 0 || len >= MAX_STARTUP_PACKET_LENGTH)
+ if (len < 4 || len > MAX_STARTUP_PACKET_LENGTH)
ereport(ERROR,
(errmsg("failed while reading startup packet"),
errdetail("incorrect packet length (%d)", len)));
sp->startup_packet = palloc0(len);
- /* read startup packet */
+ /*
+	 * Read the startup packet, excluding the length field of the message.
+ */
pool_read_with_error(cp, sp->startup_packet, len,
"startup packet");
@@ -861,7 +863,8 @@ connect_using_existing_connection(POOL_CONNECTION * frontend,
do_command(frontend, CONNECTION(backend, i),
command_buf, MAJOR(backend),
MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->key,
+ MAIN_CONNECTION(backend)->keylen, 0);
}
PG_CATCH();
{
@@ -902,7 +905,7 @@ connect_using_existing_connection(POOL_CONNECTION * frontend,
* process cancel request
*/
void
-cancel_request(CancelPacket * sp)
+cancel_request(CancelPacket * sp, int32 splen)
{
int len;
int fd;
@@ -911,8 +914,8 @@ cancel_request(CancelPacket * sp)
j,
k;
ConnectionInfo *c = NULL;
- CancelPacket cp;
bool found = false;
+ int32 keylen; /* cancel key length */
if (pool_config->log_client_messages)
ereport(LOG,
@@ -921,7 +924,20 @@ cancel_request(CancelPacket * sp)
ereport(DEBUG1,
(errmsg("Cancel request received")));
- /* look for cancel key from shmem info */
+ /*
+	 * The cancel key length is the cancel message length minus the cancel
+	 * request code and process id fields.
+ */
+ keylen = splen - sizeof(int32) - sizeof(int32);
+
+ /*
+	 * Look for the cancel key in the shmem info. The frontend should have
+	 * saved one of the cancel keys among the backend groups and sent it in
+	 * the cancel request message. We are looking for the backend which has
+	 * the same cancel key and pid. The query we want to cancel should have
+	 * been running on one of the backend groups, so some of the cancel
+	 * requests may not work, but that is not a problem: they are simply
+	 * ignored by the backend.
+ */
for (i = 0; i < pool_config->num_init_children; i++)
{
for (j = 0; j < pool_config->max_pool; j++)
@@ -931,14 +947,19 @@ cancel_request(CancelPacket * sp)
c = pool_coninfo(i, j, k);
ereport(DEBUG2,
(errmsg("processing cancel request"),
- errdetail("connection info: address:%p database:%s user:%s pid:%d key:%d i:%d",
- c, c->database, c->user, ntohl(c->pid), ntohl(c->key), i)));
- if (c->pid == sp->pid && c->key == sp->key)
+ errdetail("connection info: address:%p database:%s user:%s pid:%d sp.pid:%d keylen:%d sp.keylen:%d i:%d",
+ c, c->database, c->user, ntohl(c->pid), ntohl(sp->pid),
+ c->keylen, keylen, i)));
+ if (c->pid == sp->pid && c->keylen == keylen &&
+ memcmp(c->key, sp->key, keylen) == 0)
{
ereport(DEBUG1,
(errmsg("processing cancel request"),
- errdetail("found pid:%d key:%d i:%d", ntohl(c->pid), ntohl(c->key), i)));
+ errdetail("found pid:%d keylen:%d i:%d", ntohl(c->pid), c->keylen, i)));
+ /*
+ * "c" is a pointer to i th child, j th pool, and 0 th backend.
+ */
c = pool_coninfo(i, j, 0);
found = true;
goto found;
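The new three-way match (pid, key length, key bytes) can be isolated into a predicate for clarity (illustrative: `MiniConnInfo` and `cancel_key_matches` are hypothetical stand-ins for `ConnectionInfo` and the inline `if` above):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Trimmed-down stand-in for ConnectionInfo: just the fields the cancel
 * lookup compares. */
typedef struct
{
	int32_t		pid;			/* as stored in shmem (network byte order) */
	int32_t		keylen;			/* cancel key length */
	char		key[256];		/* MAX_CANCELKEY_LENGTH */
} MiniConnInfo;

/* A connection matches a cancel request only when pid, key length, and
 * the raw key bytes all agree; equality works regardless of byte order. */
static bool
cancel_key_matches(const MiniConnInfo * c, int32_t sp_pid,
				   const char *sp_key, int32_t sp_keylen)
{
	return c->pid == sp_pid &&
		c->keylen == sp_keylen &&
		memcmp(c->key, sp_key, sp_keylen) == 0;
}
```

Comparing the length before the bytes avoids ever reading past the stored key for a shorter request.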
@@ -951,12 +972,19 @@ found:
if (!found)
{
ereport(LOG,
- (errmsg("invalid cancel key: pid:%d key:%d", ntohl(sp->pid), ntohl(sp->key))));
+ (errmsg("invalid cancel key: pid:%d keylen:%d", ntohl(sp->pid), keylen)));
return; /* invalid key */
}
+ /*
+	 * We send the cancel request message to all backend groups, so some of
+	 * the cancel requests may not work, but that is not a problem: they
+	 * are simply ignored by the backend.
+ */
for (i = 0; i < NUM_BACKENDS; i++, c++)
{
+ int32 cancel_request_code;
+
if (!VALID_BACKEND(i))
continue;
@@ -978,18 +1006,18 @@ found:
pool_set_db_node_id(con, i);
- len = htonl(sizeof(len) + sizeof(CancelPacket));
- pool_write(con, &len, sizeof(len));
-
- cp.protoVersion = sp->protoVersion;
- cp.pid = c->pid;
- cp.key = c->key;
+ len = htonl(splen + sizeof(int32)); /* splen does not include packet length field */
+ pool_write(con, &len, sizeof(len)); /* send cancel messages length */
+ cancel_request_code = htonl(PG_PROTOCOL(1234,5678)); /* cancel request code */
+ pool_write(con, &cancel_request_code, sizeof(int32));
+ pool_write(con, &c->pid, sizeof(int32)); /* send pid */
+ pool_write(con, c->key, keylen); /* send cancel key */
ereport(LOG,
- (errmsg("forwarding cancel request to backend"),
- errdetail("canceling backend pid:%d key: %d", ntohl(cp.pid), ntohl(cp.key))));
+ (errmsg("forwarding cancel request to backend %d", i),
+ errdetail("canceling backend pid: %d keylen: %d", ntohl(sp->pid), keylen)));
- if (pool_write_and_flush_noerror(con, &cp, sizeof(CancelPacket)) < 0)
+ if (pool_flush_noerror(con) < 0)
ereport(WARNING,
(errmsg("failed to send cancel request to backend %d", i)));
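The CancelRequest packet the loop now forwards field by field can be sketched as one buffer write (hedged: `pack_cancel_request` is hypothetical; `CANCEL_REQUEST_CODE` reproduces `PG_PROTOCOL(1234,5678)` = 80877102 by assumption of the standard `(major << 16) | minor` encoding):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* (1234 << 16) | 5678, i.e. PG_PROTOCOL(1234,5678) */
#define CANCEL_REQUEST_CODE	80877102

/* Hypothetical sketch of the forwarded CancelRequest: int32 total length
 * (including the length field itself), int32 request code, int32 pid,
 * then the variable-length cancel key. */
static size_t
pack_cancel_request(char *buf, int32_t pid_net,
					const char *key, int32_t keylen)
{
	char	   *p = buf;
	/* splen = code + pid + key; the length field adds itself on top */
	int32_t		len = htonl((int32_t) (sizeof(int32_t) * 3 + keylen));
	int32_t		code = htonl(CANCEL_REQUEST_CODE);

	memcpy(p, &len, sizeof(len));
	p += sizeof(len);
	memcpy(p, &code, sizeof(code));
	p += sizeof(code);
	memcpy(p, &pid_net, sizeof(pid_net));
	p += sizeof(pid_net);
	memcpy(p, key, keylen);
	p += keylen;
	return (size_t) (p - buf);
}
```

A 4-byte key yields the classic 16-byte packet; this matches `splen + sizeof(int32)` in the patch, since `splen` excludes the length field.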
@@ -1978,7 +2006,7 @@ retry_startup:
/* cancel request? */
if (sp->major == 1234 && sp->minor == 5678)
{
- cancel_request((CancelPacket *) sp->startup_packet);
+ cancel_request((CancelPacket *) sp->startup_packet, sp->len);
pool_free_startup_packet(sp);
connection_count_down();
return NULL;
diff --git a/src/protocol/pool_process_query.c b/src/protocol/pool_process_query.c
index cb72e9c54..5a6b97ba1 100644
--- a/src/protocol/pool_process_query.c
+++ b/src/protocol/pool_process_query.c
@@ -3,7 +3,7 @@
* pgpool: a language independent connection pool server for PostgreSQL
* written by Tatsuo Ishii
*
- * Copyright (c) 2003-2024 PgPool Global Development Group
+ * Copyright (c) 2003-2025 PgPool Global Development Group
*
* Permission to use, copy, modify, and distribute this software and
* its documentation for any purpose and without fee is hereby
@@ -512,7 +512,8 @@ send_simplequery_message(POOL_CONNECTION * backend, int len, char *string, int m
*/
void
-wait_for_query_response_with_trans_cleanup(POOL_CONNECTION * frontend, POOL_CONNECTION * backend, int protoVersion, int pid, int key)
+wait_for_query_response_with_trans_cleanup(POOL_CONNECTION * frontend, POOL_CONNECTION * backend,
+ int protoVersion, int pid, char *key, int keylen)
{
PG_TRY();
{
@@ -527,8 +528,8 @@ wait_for_query_response_with_trans_cleanup(POOL_CONNECTION * frontend, POOL_CONN
cancel_packet.protoVersion = htonl(PROTO_CANCEL);
cancel_packet.pid = pid;
- cancel_packet.key = key;
- cancel_request(&cancel_packet);
+ memcpy(cancel_packet.key, key, keylen);
+ cancel_request(&cancel_packet, keylen + sizeof(int32) + sizeof(int32));
}
PG_RE_THROW();
@@ -1481,7 +1482,7 @@ pool_send_readyforquery(POOL_CONNECTION * frontend)
*/
POOL_STATUS
do_command(POOL_CONNECTION * frontend, POOL_CONNECTION * backend,
- char *query, int protoMajor, int pid, int key, int no_ready_for_query)
+ char *query, int protoMajor, int pid, char *key, int keylen, int no_ready_for_query)
{
int len;
char kind;
@@ -1502,7 +1503,8 @@ do_command(POOL_CONNECTION * frontend, POOL_CONNECTION * backend,
backend,
protoMajor,
pid,
- key);
+ key,
+ keylen);
/*
* We must check deadlock error here. If a deadlock error is detected by a
@@ -2767,7 +2769,7 @@ insert_lock(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * backend, char *qu
else
{
status = do_command(frontend, MAIN(backend), qbuf, MAJOR(backend), MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
}
}
else if (lock_kind == 2)
@@ -2825,7 +2827,7 @@ insert_lock(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * backend, char *qu
else
{
status = do_command(frontend, MAIN(backend), qbuf, MAJOR(backend), MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key, 0);
+								 MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
}
}
}
@@ -2843,7 +2845,8 @@ insert_lock(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * backend, char *qu
{
if (deadlock_detected)
status = do_command(frontend, CONNECTION(backend, i), POOL_ERROR_QUERY, PROTO_MAJOR_V3,
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
else
{
if (lock_kind == 1)
@@ -2858,7 +2861,8 @@ insert_lock(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * backend, char *qu
else
{
status = do_command(frontend, CONNECTION(backend, i), qbuf, PROTO_MAJOR_V3,
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
}
}
else if (lock_kind == 2 || lock_kind == 3)
@@ -2933,7 +2937,8 @@ static POOL_STATUS add_lock_target(POOL_CONNECTION * frontend, POOL_CONNECTION_P
per_node_statement_log(backend, MAIN_NODE_ID, "LOCK TABLE pgpool_catalog.insert_lock IN SHARE ROW EXCLUSIVE MODE");
if (do_command(frontend, MAIN(backend), "LOCK TABLE pgpool_catalog.insert_lock IN SHARE ROW EXCLUSIVE MODE",
- PROTO_MAJOR_V3, MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 0) != POOL_CONTINUE)
+ PROTO_MAJOR_V3, MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0) != POOL_CONTINUE)
ereport(ERROR,
(errmsg("unable to add lock target"),
errdetail("do_command returned DEADLOCK status")));
@@ -3043,7 +3048,8 @@ static POOL_STATUS insert_oid_into_insert_lock(POOL_CONNECTION * frontend,
per_node_statement_log(backend, MAIN_NODE_ID, qbuf);
status = do_command(frontend, MAIN(backend), qbuf, PROTO_MAJOR_V3,
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
return status;
}
@@ -4140,7 +4146,8 @@ start_internal_transaction(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * ba
per_node_statement_log(backend, i, "BEGIN");
if (do_command(frontend, CONNECTION(backend, i), "BEGIN", MAJOR(backend),
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 0) != POOL_CONTINUE)
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0) != POOL_CONTINUE)
ereport(ERROR,
(errmsg("unable to start the internal transaction"),
errdetail("do_command returned DEADLOCK status")));
@@ -4205,7 +4212,8 @@ end_internal_transaction(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * back
PG_TRY();
{
if (do_command(frontend, CONNECTION(backend, i), "COMMIT", MAJOR(backend),
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 1) != POOL_CONTINUE)
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 1) != POOL_CONTINUE)
{
ereport(ERROR,
(errmsg("unable to COMMIT the transaction"),
@@ -4258,7 +4266,8 @@ end_internal_transaction(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * back
PG_TRY();
{
if (do_command(frontend, MAIN(backend), "COMMIT", MAJOR(backend),
- MAIN_CONNECTION(backend)->pid, MAIN_CONNECTION(backend)->key, 1) != POOL_CONTINUE)
+ MAIN_CONNECTION(backend)->pid,
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 1) != POOL_CONTINUE)
{
ereport(ERROR,
(errmsg("unable to COMMIT the transaction"),
diff --git a/src/protocol/pool_proto_modules.c b/src/protocol/pool_proto_modules.c
index a6336f8e2..c2802998a 100644
--- a/src/protocol/pool_proto_modules.c
+++ b/src/protocol/pool_proto_modules.c
@@ -2490,7 +2490,7 @@ static POOL_STATUS close_standby_transactions(POOL_CONNECTION * frontend,
per_node_statement_log(backend, i, "COMMIT");
if (do_command(frontend, CONNECTION(backend, i), "COMMIT", MAJOR(backend),
MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key, 0) != POOL_CONTINUE)
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0) != POOL_CONTINUE)
ereport(ERROR,
(errmsg("unable to close standby transactions"),
errdetail("do_command returned DEADLOCK status")));
diff --git a/src/rewrite/pool_lobj.c b/src/rewrite/pool_lobj.c
index 38187fe14..7601a9316 100644
--- a/src/rewrite/pool_lobj.c
+++ b/src/rewrite/pool_lobj.c
@@ -5,7 +5,7 @@
* pgpool: a language independent connection pool server for PostgreSQL
* written by Tatsuo Ishii
*
- * Copyright (c) 2003-2010 PgPool Global Development Group
+ * Copyright (c) 2003-2025 PgPool Global Development Group
*
* Permission to use, copy, modify, and distribute this software and
* its documentation for any purpose and without fee is hereby
@@ -174,7 +174,7 @@ pool_rewrite_lo_creat(char kind, char *packet, int packet_len,
snprintf(qbuf, sizeof(qbuf), "LOCK TABLE %s IN SHARE ROW EXCLUSIVE MODE", pool_config->lobj_lock_table);
per_node_statement_log(backend, MAIN_NODE_ID, qbuf);
status = do_command(frontend, MAIN(backend), qbuf, MAJOR(backend), MAIN_CONNECTION(backend)->pid,
- MAIN_CONNECTION(backend)->key, 0);
+ MAIN_CONNECTION(backend)->key, MAIN_CONNECTION(backend)->keylen, 0);
if (status == POOL_END)
{
ereport(WARNING,
diff --git a/src/sample/pgpool.conf.sample-stream b/src/sample/pgpool.conf.sample-stream
index 9478198d7..301289c92 100644
--- a/src/sample/pgpool.conf.sample-stream
+++ b/src/sample/pgpool.conf.sample-stream
@@ -19,12 +19,12 @@
#------------------------------------------------------------------------------
# BACKEND CLUSTERING MODE
-# Choose one of: 'streaming_replication', 'native_replication',
-# 'logical_replication', 'slony', 'raw' or 'snapshot_isolation'
+# Choose one of: streaming_replication, native_replication,
+# logical_replication, slony, raw or snapshot_isolation
# (change requires restart)
#------------------------------------------------------------------------------
-backend_clustering_mode = 'streaming_replication'
+backend_clustering_mode = streaming_replication
#------------------------------------------------------------------------------
# CONNECTIONS
@@ -262,7 +262,7 @@ backend_clustering_mode = 'streaming_replication'
# Log any backend messages
# Valid values are none, terse and verbose
-#log_standby_delay = 'if_over_threshold'
+#log_standby_delay = if_over_threshold
# Log standby delay
# Valid values are combinations of always,
# if_over_threshold, none
@@ -457,33 +457,33 @@ backend_clustering_mode = 'streaming_replication'
# If off, SQL comments effectively prevent the judgment
# (pre 3.4 behavior).
-#disable_load_balance_on_write = 'transaction'
+#disable_load_balance_on_write = transaction
# Load balance behavior when write query is issued
# in an explicit transaction.
#
# Valid values:
#
- # 'transaction' (default):
+ # transaction (default):
# if a write query is issued, subsequent
# read queries will not be load balanced
# until the transaction ends.
#
- # 'trans_transaction':
+ # trans_transaction:
# if a write query is issued, subsequent
# read queries in an explicit transaction
# will not be load balanced until the session ends.
#
- # 'dml_adaptive':
+ # dml_adaptive:
# Queries on the tables that have already been
# modified within the current explicit transaction will
# not be load balanced until the end of the transaction.
#
- # 'always':
+ # always:
# if a write query is issued, read queries will
# not be load balanced until the session ends.
#
# Note that any query not in an explicit transaction
- # is not affected by the parameter except 'always'.
+ # is not affected by the parameter except always.
#dml_adaptive_object_relationship_list= ''
# comma separated list of object pairs
@@ -494,7 +494,7 @@ backend_clustering_mode = 'streaming_replication'
# example: 'tb_t1:tb_t2,insert_tb_f_func():tb_f,tb_v:my_view'
# Note: function name in this list must also be present in
# the write_function_list
- # only valid for disable_load_balance_on_write = 'dml_adaptive'.
+ # only valid for disable_load_balance_on_write = dml_adaptive.
#statement_level_load_balance = off
# Enables statement level load balancing
@@ -855,8 +855,8 @@ backend_clustering_mode = 'streaming_replication'
# '' to disable monitoring
# (change requires restart)
-#wd_lifecheck_method = 'heartbeat'
- # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
+#wd_lifecheck_method = heartbeat
+ # Method of watchdog lifecheck (heartbeat or query or external)
# (change requires restart)
#wd_interval = 10
# lifecheck interval (sec) > 0
@@ -964,27 +964,27 @@ backend_clustering_mode = 'streaming_replication'
#memory_cache_enabled = off
# If on, use the memory cache functionality, off by default
# (change requires restart)
-#memqcache_method = 'shmem'
- # Cache storage method. either 'shmem'(shared memory) or
- # 'memcached'. 'shmem' by default
+#memqcache_method = shmem
+ # Cache storage method. either shmem(shared memory) or
+ # memcached. shmem by default
# (change requires restart)
#memqcache_memcached_host = 'localhost'
# Memcached host name or IP address. Mandatory if
- # memqcache_method = 'memcached'.
+ # memqcache_method = memcached.
# Defaults to localhost.
# (change requires restart)
#memqcache_memcached_port = 11211
- # Memcached port number. Mandatory if memqcache_method = 'memcached'.
+ # Memcached port number. Mandatory if memqcache_method = memcached.
# Defaults to 11211.
# (change requires restart)
#memqcache_total_size = 64MB
# Total memory size in bytes for storing memory cache.
- # Mandatory if memqcache_method = 'shmem'.
+ # Mandatory if memqcache_method = shmem.
# Defaults to 64MB.
# (change requires restart)
#memqcache_max_num_cache = 1000000
# Total number of cache entries. Mandatory
- # if memqcache_method = 'shmem'.
+ # if memqcache_method = shmem.
# Each cache entry consumes 48 bytes on shared memory.
# Defaults to 1,000,000(45.8MB).
# (change requires restart)
@@ -1002,7 +1002,7 @@ backend_clustering_mode = 'streaming_replication'
# Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
# (change requires restart)
#memqcache_cache_block_size = 1MB
- # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
+ # Cache block size in bytes. Mandatory if memqcache_method = shmem.
# Defaults to 1MB.
# (change requires restart)
#memqcache_oiddir = '/var/log/pgpool/oiddir'
diff --git a/src/tools/pcp/pcp_frontend_client.c b/src/tools/pcp/pcp_frontend_client.c
index be192dd27..ecc922b93 100644
--- a/src/tools/pcp/pcp_frontend_client.c
+++ b/src/tools/pcp/pcp_frontend_client.c
@@ -751,7 +751,7 @@ output_procinfo_result(PCPResultInfo * pcpResInfo, bool all, bool verbose)
format = format_titles(titles, types, sizeof(titles)/sizeof(char *));
else
{
- format = "%s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s\n";
+ format = "%s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s\n";
}
for (i = 0; i < array_size; i++)