path: root/src/pl/plpgsql
Age  Commit message  Author
11 days  Use palloc_object() and palloc_array() in more areas of the tree  (Michael Paquier)
The idea is to further encourage the use of these new routines across the tree, as they offer stronger type-safety guarantees than palloc(). The following paths are included in this batch, covering all the areas proposed by the author for the most trivial changes, except src/backend (by far the largest batch): src/bin/, src/common/, src/fe_utils/, src/include/, src/pl/, src/test/, and src/tutorial/. Similar work has been done in 31d3847a37be. The code compiles the same before and after this commit, except for line-number changes in blkreftable.c, pgfnames.c, and pl_exec.c, because some of the new allocation formulas are shorter. Author: David Geier <geidav.pg@gmail.com> Discussion: https://postgr.es/m/ad0748d4-3080-436e-b0bc-ac8f86a3466a@gmail.com
2025-09-16  Provide more-specific error details/hints for function lookup failures.  (Tom Lane)
Up to now we've contented ourselves with a one-size-fits-all error hint when we fail to find any match to a function or procedure call. That was mostly okay in the beginning, but it was never great, and since the introduction of named arguments it's really not adequate. We at least ought to distinguish "function name doesn't exist" from "function name exists, but not with those argument names". And the rules for named-argument matching are arcane enough that some more detail seems warranted if we match the argument names but the call still doesn't work. This patch creates a framework for dealing with these problems: FuncnameGetCandidates and related code will now pass back a bitmask of flags showing how far the match succeeded. This allows a considerable amount of granularity in the reports. The set-bits-in-a-bitmask approach means that when there are multiple candidate functions, the report will reflect the match(es) that got the furthest, which seems correct. Also, we can avoid mentioning "maybe add casts" unless failure to match argument types is actually the issue. Extend the same return-a-bitmask approach to OpernameGetCandidates. The issues around argument names don't apply to operator syntax, but it still seems worth distinguishing between "there is no operator of that name" and "we couldn't match the argument types". While at it, adjust these messages and related ones to more strictly separate "detail" from "hint", following our message style guidelines' distinction between those. Reported-by: Dominique Devienne <ddevienne@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/1756041.1754616558@sss.pgh.pa.us
2025-09-12  Allow redeclaration of typedef yyscan_t  (Peter Eisentraut)
This is allowed in C11, so we don't need the workaround guards against it anymore. This effectively reverts commit 382092a0cd2 that put these guards in place. Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
2025-08-03  Reduce leakage during PL/pgSQL function compilation.  (Tom Lane)
format_procedure leaks memory, so run it in a short-lived context not the session-lifespan cache context for the PL/pgSQL function. parse_datatype called the core parser in the function's cache context, thus leaking potentially a lot of storage into that context. We were also being a bit careless with the TypeName structures made in that code path and others. Most of the time we don't need to retain the TypeName, so make sure it is made in the short-lived temp context, and copy it only if we do need to retain it. These are far from the only leaks in PL/pgSQL compilation, but they're the biggest as far as I've seen, and further improvement looks like it'd require delicate and bug-prone surgery. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
2025-07-09  Use pg_assume() to avoid compiler warning below exec_set_found()  (Andres Freund)
The warning, visible when building with -O3 and a recent-ish gcc, is due to gcc not realizing that found is a byvalue type and therefore will never be interpreted as a varlena type. Discussion: https://postgr.es/m/3prdb6hkep3duglhsujrn52bkvnlkvhc54fzvph2emrsm4vodl@77yy6j4hkemb Discussion: https://postgr.es/m/20230316172818.x6375uvheom3ibt2%40awork3.anarazel.de Discussion: https://postgr.es/m/20240207203138.sknifhlppdtgtxnk%40awork3.anarazel.de
2025-07-07  Restore the ability to run pl/pgsql expression queries in parallel.  (Tom Lane)
pl/pgsql's notion of an "expression" is very broad, encompassing any SQL SELECT query that returns a single column and no more than one row. So there are cases, for example evaluation of an aggregate function, where the query involves significant work and it'd be useful to run it with parallel workers. This used to be possible, but commits 3eea7a0c9 et al unintentionally disabled it. The simplest fix is to make exec_eval_expr() pass maxtuples = 0 rather than 2 to exec_run_select(). This avoids the new rule that we will never use parallelism when a nonzero "count" limit is passed to ExecutorRun(). (Note that the pre-3eea7a0c9 behavior was indeed unsafe, so reverting that rule is not in the cards.) The reason for passing 2 before was that exec_eval_expr() will throw an error if it gets more than one returned row, so we figured that as soon as we have two rows we know that will happen and we might as well stop running the query. That choice was cost-free when it was made; but disabling parallelism is far from cost-free, so now passing 2 amounts to optimizing a failure case at the expense of useful cases. An expression query that can return more than one row is certainly broken. People might now need to wait a bit longer to discover such breakage; but hopefully few will use enormously expensive cases as their first test of new pl/pgsql logic. Author: Dipesh Dhameliya <dipeshdhameliya125@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CABgZEgdfbnq9t6xXJnmXbChNTcWFjeM_6nuig41tm327gYi2ig@mail.gmail.com Backpatch-through: 13
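For illustration, a minimal PL/pgSQL sketch of the kind of assignment expression this re-enables parallelism for (big_table and amount are hypothetical names, and whether workers are actually used still depends on costing and parallel-related settings):

    CREATE FUNCTION big_total() RETURNS numeric
    LANGUAGE plpgsql AS $$
    DECLARE
        total numeric;
    BEGIN
        -- The right-hand side is a plpgsql "expression": a query returning one
        -- column and at most one row.  With this fix it is no longer run with
        -- maxtuples = 2, so the planner may again choose a parallel plan for
        -- the aggregate.
        total := (SELECT sum(amount) FROM big_table);
        RETURN total;
    END;
    $$;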
2025-06-30  Improve error report for PL/pgSQL reserved word used as a field name.  (Tom Lane)
The current code in resolve_column_ref (dating to commits 01f7d2990 and fe24d7816) believes that not finding a RECFIELD datum is a can't-happen case, in consequence of which I didn't spend a whole lot of time considering what to do if it did happen. But it turns out that it *can* happen if the would-be field name is a fully-reserved PL/pgSQL keyword. Change the error message to describe that situation, and add a test case demonstrating it. This might need further refinement if anyone can find other ways to trigger a failure here; but without an example it's not clear what other error to throw. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
2025-06-30  De-reserve keywords EXECUTE and STRICT in PL/pgSQL.  (Tom Lane)
On close inspection, there does not seem to be a strong reason why these should be fully-reserved keywords. I guess they just escaped consideration in previous attempts to minimize PL/pgSQL's list of reserved words. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
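An untested sketch of what de-reserving allows; before this change, using these words as variable names was rejected outright:

    DO $$
    DECLARE
        strict  boolean := true;   -- previously an error: STRICT was fully reserved
        execute text    := 'ok';   -- likewise for EXECUTE
    BEGIN
        RAISE NOTICE '% / %', strict, execute;
    END
    $$;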
2025-05-28  Fix memory leakage when function compilation fails.  (Tom Lane)
In pl_comp.c, initially create the plpgsql function's cache context under the assumed-short-lived caller's context, and reparent it under CacheMemoryContext only upon success. This avoids a process-lifespan leak of 8kB or more if the function contains syntax errors. (This leakage has existed for a long time without many complaints, but as we move towards a possibly multi-threaded future, getting rid of process-lifespan leaks grows more important.) In funccache.c, arrange to reclaim the CachedFunction struct in case the language-specific compile callback function throws an error; previously, that resulted in an independent process-lifespan leak. This is arguably a new bug in v18, since the leakage now occurred for SQL-language functions as well as plpgsql. Also, don't fill fn_xmin/fn_tid/dcallback until after successful completion of the compile callback. This avoids a scenario where a partially-built function cache might appear already valid upon later inspection, and another scenario where dcallback might fail upon being presented with an incomplete cache entry. We would have to reach such a faulty cache entry via a pre-existing fn_extra pointer, so I'm not sure these scenarios correspond to any live bug. (The predecessor code in pl_comp.c never took any care about this, and we've heard no complaints about that.) Still, it's better to be careful. Given the lack of field complaints, I'm not very excited about back-patching any of this; but it seems still in-scope for v18. Discussion: https://postgr.es/m/999171.1748300004@sss.pgh.pa.us
2025-05-05  Translation updates  (Peter Eisentraut)
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: f90ee4803c30491e5c49996b973b8a30de47bfb2
2025-04-02  Change SQL-language functions to use the plan cache.  (Tom Lane)
In the historical implementation of SQL functions (if they don't get inlined), we built plans for all the contained queries at first call within an outer query, re-used those plans for the duration of the outer query, and then forgot everything. This was not ideal, not least because the plans could not be customized to specific values of the function's parameters. Our plancache infrastructure seems mature enough to be used here. That will solve both the problem of not being able to build custom plans and the problem of not being able to share work across successive outer queries. Aside from those performance concerns, this change fixes a longstanding bugaboo with SQL functions: you could not write DDL that would affect later statements in the same function. That's mostly still true with new-style SQL functions, since the results of parse analysis are baked into the stored query trees (and protected by dependency records). But for old-style SQL functions, it will now work much as it does with PL/pgSQL functions, because we delay parse analysis and planning of each query until we're ready to run it. Some edge cases that require replanning are now handled better too; see for example the new rowsecurity test, where we now detect an RLS context change that was previously missed. One other edge-case change that might be worthy of a release note is that we now insist that a SQL function's result be generated by the physically-last query within it. Previously, if the last original query was deleted by a DO INSTEAD NOTHING rule, we'd be willing to take the result from the preceding query instead. This behavior was undocumented except in source-code comments, and it seems hard to believe that anyone's relying on it. Along the way to this feature, we needed a few infrastructure changes:
* The plancache can now take either a raw parse tree or an analyzed-but-not-rewritten Query as the starting point for a CachedPlanSource. If given a Query, it is the caller's responsibility that nothing will happen to invalidate that form of the query. We use this for new-style SQL functions, where what's in pg_proc is serialized Query(s) and we trust the dependency mechanism to disallow DDL that would break those.
* The plancache now offers a way to invoke a post-rewrite callback to examine/modify the rewritten parse tree when it is rebuilding the parse trees after a cache invalidation. We need this because SQL functions sometimes adjust the parse tree to make its output exactly match the declared result type; if the plan gets rebuilt, that has to be re-done.
* There is a new backend module utils/cache/funccache.c that abstracts the idea of caching data about a specific function usage (a particular function and set of input data types). The code in it is moved almost verbatim from PL/pgSQL, which has done that for a long time. We use that logic now for SQL-language functions too, and maybe other PLs will have use for it in the future.
Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://postgr.es/m/8216639.NyiUUSuA9g@aivenlaptop
2025-03-26  Use PG_MODULE_MAGIC_EXT in our installable shared libraries.  (Tom Lane)
It seems potentially useful to label our shared libraries with version information, now that a facility exists for retrieving that. This patch labels them with the PG_VERSION string. There was some discussion about using semantic versioning conventions, but that doesn't seem terribly helpful for modules with no SQL-level presence; and for those that do have SQL objects, we typically expect them to support multiple revisions of the SQL definitions, so it'd still not be very helpful. I did not label any of src/test/modules/. It seems unnecessary since we don't install those, and besides there ought to be someplace that still provides test coverage for the original PG_MODULE_MAGIC macro. Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/dd4d1b59-d0fe-49d5-b28f-1e463b68fa32@gmail.com
2025-03-22  plpgsql: make WHEN OTHERS distinct from WHEN SQLSTATE '00000'.  (Tom Lane)
The catchall exception condition OTHERS was represented as sqlerrstate == 0, which was a poor choice because that comes out the same as SQLSTATE '00000'. While we don't issue that as an error code ourselves, there isn't anything particularly stopping users from doing so. Use -1 instead, which can't match any allowed SQLSTATE string. While at it, invent a macro PLPGSQL_OTHERS to use instead of a hard-coded magic number. While this seems like a bug fix, I'm inclined not to back-patch. It seems barely possible that someone has written code like this and would be annoyed by changing the behavior in a minor release. Reported-by: David Fiedler <david.fido.fiedler@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAHjN70-=H5EpTOuZVbC8mPvRS5EfZ4MY2=OUdVDWoyGvKhb+Rw@mail.gmail.com
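A small sketch of a catch-all handler; the behavioral point is only that OTHERS is now stored as its own marker (the PLPGSQL_OTHERS macro, -1) instead of 0, so it can no longer collide with a condition spelled SQLSTATE '00000':

    DO $$
    BEGIN
        PERFORM 1 / 0;
    EXCEPTION
        WHEN OTHERS THEN
            -- OTHERS still matches any error, but it is no longer represented
            -- internally the same way as SQLSTATE '00000'.
            RAISE NOTICE 'caught: % (%)', SQLERRM, SQLSTATE;
    END
    $$;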
2025-03-21  Revert inappropriate weakening of an Assert in plpgsql.  (Tom Lane)
Commit 682ce911f modified exec_save_simple_expr to accept a Param in the tlist of a Gather node, rather than the normal case of a Var referencing the Gather's input. It turns out that this was a kluge to work around the bug later fixed in 0f7ec8d9c, namely that setrefs.c was failing to replace Params in upper plan nodes with Var references to the same Params appearing in the child tlists. With that fixed, there seems no reason to continue to allow a Param here. (Moreover, even if we did expect a Param here, the semantically correct thing to do would be to take the Param as the expression being sought. Whatever it may represent, it is *not* a reference to the child.) Hence, revert that part of 682ce911f. That all happened a long time ago. However, since the net effect here is just to tighten an Assert condition, I'm content to change it only in master. Discussion: https://postgr.es/m/1565347.1742572349@sss.pgh.pa.us
2025-03-21  Fix plpgsql's handling of simple expressions in scrollable cursors.  (Tom Lane)
exec_save_simple_expr did not account for the possibility that standard_planner would stick a Materialize node atop the plan of even a simple Result, if CURSOR_OPT_SCROLL is set. This led to an "unexpected plan node type" error. This is a very old bug, but it'd only be reached by declaring a cursor for a "SELECT simple-expression" query and explicitly marking it scrollable, which is an odd thing to do. So the lack of prior reports isn't too surprising. Bug: #18859 Reported-by: Olleg Samoylov <splarv@ya.ru> Author: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/18859-0d5f28ac99a37059@postgresql.org Backpatch-through: 13
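Roughly the failing shape described in the report: an explicitly scrollable bound cursor over a simple expression, which before the fix could fail at OPEN with "unexpected plan node type":

    DO $$
    DECLARE
        c SCROLL CURSOR FOR SELECT 42;   -- simple expression, forced scrollable
        v int;
    BEGIN
        OPEN c;          -- planning adds a Materialize node because of SCROLL
        FETCH c INTO v;
        CLOSE c;
    END
    $$;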
2025-03-13  pg_noreturn to replace pg_attribute_noreturn()  (Peter Eisentraut)
We want to support a "noreturn" decoration on more compilers besides just GCC-compatible ones, but for that we need to move the decoration in front of the function declaration instead of either behind it or wherever, which is the current style afforded by GCC-style attributes. Also rename the macro to "pg_noreturn" to be similar to the C11 standard "noreturn". pg_noreturn is now supported on all compilers that support C11 (using _Noreturn), as well as GCC-compatible ones (using __attribute__, as before), as well as MSVC (using __declspec). (When PostgreSQL requires C11, the latter two variants can be dropped.) Now, all supported compilers effectively support pg_noreturn, so the extra code for !HAVE_PG_ATTRIBUTE_NORETURN can be dropped. This also fixes a possible problem if third-party code includes stdnoreturn.h, because then the current definition of #define pg_attribute_noreturn() __attribute__((noreturn)) would cause an error. Note that the C standard does not support a noreturn attribute on function pointer types. So we have to drop these here. There are only two instances at this time, so it's not a big loss. In one case, we can make up for it by adding the pg_noreturn to a wrapper function and adding a pg_unreachable(), in the other case, the latter was already done before. Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/flat/pxr5b3z7jmkpenssra5zroxi7qzzp6eswuggokw64axmdixpnk@zbwxuq7gbbcw
2025-03-07  Include column name in build_attrmap_by_position's error reports.  (Tom Lane)
Formerly we only provided the column number, but it's frequently more useful to mention the column name. The input tupdesc often doesn't have useful column names, but the output tupdesc usually contains user-supplied names, so report that one. Author: Marcos Pegoraro <marcos@f10.com.br> Co-authored-by: jian he <jian.universality@gmail.com> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Co-authored-by: Erik Wienhold <ewie@ewie.name> Reviewed-by: Vladlen Popolitov <v.popolitov@postgrespro.ru> Discussion: https://postgr.es/m/CAB-JLwanky28gjAMdnMh1CjyO1b2zLdr6UOA1-oY9G7PVL9KKQ@mail.gmail.com
2025-03-03  Allow => syntax for named cursor arguments in plpgsql.  (Tom Lane)
We've traditionally accepted "name := value" syntax for cursor arguments in plpgsql. But it turns out that the equivalent statements in Oracle use "name => value". Since we accept both forms of punctuation for function arguments, it makes sense to do the same here. Author: Pavel Stehule <pavel.stehule@gmail.com> Reviewed-by: Gilles Darold <gilles@darold.net> Discussion: https://postgr.es/m/CAFj8pRA3d0ARQEMbABa1n6q25AUdNmyO8aGs56XNf9pD4sRMjQ@mail.gmail.com
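For example, with a parameterized cursor both spellings are now accepted when opening it:

    DO $$
    DECLARE
        c CURSOR (lim int, off int) FOR
            SELECT g FROM generate_series(1, 100) AS g
            LIMIT lim OFFSET off;
        v int;
    BEGIN
        OPEN c (lim => 5, off => 10);   -- previously only the "lim := 5" form worked
        FETCH c INTO v;                 -- v is 11
        CLOSE c;
    END
    $$;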
2025-02-11  Allow extension functions to participate in in-place updates.  (Tom Lane)
Commit 1dc5ebc90 allowed PL/pgSQL to perform in-place updates of expanded-object variables that are being updated with assignments like "x := f(x, ...)". However this was allowed only for a hard-wired list of functions f(), since we need to be sure that f() will not modify the variable if it fails. It was always envisioned that we should make that extensible, but at the time we didn't have a good way to do so. Since then we've invented the idea of "support functions" to allow attaching specialized optimization knowledge to functions, and that is a perfect mechanism for doing this. Hence, adjust PL/pgSQL to use a support function request instead of hard-wired logic to decide if in-place update is safe. Preserve the previous optimizations by creating support functions for the three functions that were previously hard-wired. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-02-11  Implement new optimization rule for updates of expanded variables.  (Tom Lane)
If a read/write expanded variable is declared locally to the assignment statement that is updating it, and it is referenced exactly once in the assignment RHS, then we can optimize the operation as a direct update of the expanded value, whether or not the function(s) operating on it can be trusted not to modify the value before throwing an error. This works because if an error does get thrown, we no longer care what value the variable has. In cases where that doesn't work, fall back to the previous rule that checks for safety of the top-level function. In any case, postpone determination of whether these optimizations are feasible until we are executing a Param referencing the target variable and that variable holds a R/W expanded object. While the previous incarnation of exec_check_rw_parameter was pretty cheap, this is a bit less so, and our plan to invoke support functions will make it even less so. So avoiding the check for variables where it couldn't be useful should be a win. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
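A sketch of the "x := f(x, ...)" pattern these optimizations target; here the array is declared in the same block that assigns it and there is no surrounding EXCEPTION clause, so the update can be done in place rather than copying the whole array on each iteration:

    CREATE FUNCTION build_array(n int) RETURNS int[]
    LANGUAGE plpgsql AS $$
    DECLARE
        a int[] := '{}';
    BEGIN
        FOR i IN 1 .. n LOOP
            -- "a" is referenced exactly once on the right-hand side, so the
            -- read/write expanded array can be updated directly.
            a := array_append(a, i);
        END LOOP;
        RETURN a;
    END;
    $$;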
2025-02-11  Detect whether plpgsql assignment targets are "local" variables.  (Tom Lane)
Mark whether the target of a potentially optimizable assignment is "local", in the sense of being declared inside any exception block that could trap an error thrown from the assignment. (This implies that we needn't preserve the variable's value in case of an error. This patch doesn't do anything with the knowledge, but the next one will.) Normally, this requires a post-parsing scan of the function's parse tree, since we don't know while parsing a BEGIN ... construct whether we will find EXCEPTION at its end. However, if there are no BEGIN ... EXCEPTION blocks in the function at all, then all assignments are local, even those to variables representing function arguments. We optimize that common case by initializing the target_is_local flags to "true", and fixing them up with a post-scan only if we found EXCEPTION. Note that variables' default-value expressions are never interesting for expanded-variable optimization, since they couldn't contain a reference to the target variable anyway. But the code is set up to compute their target_param and target_is_local correctly anyway, for consistency and in case someone thinks of a use for that data. I added a bit of plpgsql_dumptree support to help verify that this code sets the flags as expected. I also added a plpgsql_dumptree call in plpgsql_compile_inline. It was at best an oversight that "#option dump" didn't work in a DO block; now it does. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-02-11  Preliminary refactoring of plpgsql expression construction.  (Tom Lane)
This short and boring patch simply moves the responsibility for initializing PLpgSQL_expr.target_param into plpgsql parsing, rather than doing it at first execution of the expr as before. This doesn't save anything in terms of runtime, since the work was trivial and done only once per expr anyway. But it makes the info available during parsing, which will be useful for the next step. Likewise set PLpgSQL_expr.func during parsing. According to the comments, this was once impossible; but it's certainly possible since we invented the plpgsql_curr_compile variable. Again, this saves little runtime, but it seems far cleaner conceptually. While at it, I reordered stuff in struct PLpgSQL_expr to make it clearer which fields are filled when, and merged some duplicative code in pl_gram.y. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-02-11  Refactor pl_funcs.c to provide a usage-independent tree walker.  (Tom Lane)
We haven't done this up to now because there was only one use-case, namely plpgsql_free_function_memory's search for expressions to clean up. However an upcoming patch has another need for walking plpgsql functions' statement trees, so let's create sharable tree-walker infrastructure in the same style as expression_tree_walker(). This patch actually makes the code shorter, although that's mainly down to having used a more compact coding style. (I didn't write a separate subroutine for each statement type, and I made use of some newer notations like foreach_ptr.) Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-01-29  Fix grammatical typos around possessive "its"  (John Naylor)
Some places spelled it "it's", which is short for "it is". In passing, fix a couple other nearby grammatical errors. Author: Jacob Brazeal <jacob.brazeal@gmail.com> Discussion: https://postgr.es/m/CA+COZaAO8g1KJCV0T48=CkJMjAnnfTGLWOATz+2aCh40c2Nm+g@mail.gmail.com
2025-01-24  Return yyparse() result not via global variable  (Peter Eisentraut)
Instead of passing the parse result from yyparse() via a global variable, pass it via a function output argument. This complements earlier work to make the parsers reentrant. Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
2025-01-08  plpgsql: pure parser and reentrant scanner  (Peter Eisentraut)
The plpgsql scanner is a wrapper around the core scanner, which already uses the flex %option reentrant. This patch only moves the place where the scanner handle is allocated up a few levels. Before, it was allocated in a global variable in pl_scanner.c, so from the outside the scanner was not reentrant. Now, it is allocated in pl_comp.c and is passed as an argument to yyparse(), similar to how it is handled in other reentrant scanners. Also use flex yyextra to handle context information, instead of global variables; again, this uses the existing yyextra support in the core scanner. This complements the other changes to make the scanner reentrant. The bison option %pure-parser is used to make the generated parser pure. This happens in the usual way, since plpgsql has its own bison parser definition. Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
2025-01-08  Remove useless function declaration  (Peter Eisentraut)
This function apparently never existed.
2025-01-01  Update copyright for 2025  (Bruce Momjian)
Backpatch-through: 13
2024-12-30  Update obsolete reference to plpgsql's gram.y file.  (Tom Lane)
This was evidently missed in 05346c131, which renamed that file to pl_gram.y. Author: Japin Li Discussion: https://postgr.es/m/ME0P300MB0445F7CA7456C2AC67D37A01B6092@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2024-12-26  plpgsql: Rename a variable for clarity  (Peter Eisentraut)
Rename "core_yy_extra_type core_yy" to "core_yy_extra". The previous name was a bit unclear and confusing. The new name matches the name used elsewhere for the same purpose, for example in src/backend/parser/gramparse.h.
2024-12-25  Partial pgindent of .l and .y files  (Peter Eisentraut)
Trying to clean up the code a bit while we're working on these files for the reentrant scanner/pure parser patches. This cleanup only touches the code sections after the second '%%' in each file, via a manually-supervised and locally hacked up pgindent.
2024-12-24  Remove pgrminclude annotations  (Peter Eisentraut)
Per git log, the last time someone tried to do something with pgrminclude was around 2011. Many (not all) of the "pgrminclude ignore" annotations are of a newer date but seem to have just been copied around during refactorings and file moves and don't seem to reflect an actual need anymore. There have been some parallel experiments with include-what-you-use (IWYU) annotations, but these don't seem to correspond very strongly to pgrminclude annotations, so there is no value in keeping the existing ones even for that kind of thing. So, wipe them all away. We can always add new ones in the future based on actual needs. Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
2024-11-28  Remove useless casts to (void *)  (Peter Eisentraut)
Many of them just seem to have been copied around for no real reason. Their presence causes (small) risks of hiding actual type mismatches or silently discarding qualifiers. Discussion: https://www.postgresql.org/message-id/flat/461ea37c-8b58-43b4-9736-52884e862820@eisentraut.org
2024-10-28  Remove unused #include's from contrib, pl, test .c files  (Peter Eisentraut)
as determined by IWYU. Similar to commit dbbca2cf299, but for contrib, pl, and src/test/. Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://www.postgresql.org/message-id/flat/0df1d5b1-8ca8-4f84-93be-121081bde049%40eisentraut.org
2024-10-24  Generalize plpgsql's heuristic for importing expanded objects.  (Tom Lane)
If a R/W expanded-object pointer is passed as a function parameter, take ownership of the object, regardless of its type. Previously this happened only for expanded arrays, but that was a result of sloppy thinking. (If the plpgsql function did not end by returning the object, the result would be to leak the object until the surrounding memory context is cleaned up. That's not awful, since non-expanded values have always been managed that way, but we can do better.) Per discussion with Michel Pelletier. There's a lot more to do here to make plpgsql work efficiently with expanded objects that aren't arrays, but this is an easy first step. Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2024-10-16  Further refine _SPI_execute_plan's rule for atomic execution.  (Tom Lane)
Commit 2dc1deaea turns out to have been still a brick shy of a load, because CALL statements executing within a plpgsql exception block could still pass the wrong snapshot to stable functions within the CALL's argument list. That happened because standard_ProcessUtility forces isAtomicContext to true if IsTransactionBlock is true, which it always will be inside a subtransaction. Then ExecuteCallStmt would think it does not need to push a new snapshot --- but _SPI_execute_plan didn't do so either, since it thought it was in nonatomic mode. The best fix for this seems to be for _SPI_execute_plan to operate in atomic execution mode if IsSubTransaction() is true, even when the SPI context as a whole is non-atomic. This makes _SPI_execute_plan have the same rules about when non-atomic execution is allowed as _SPI_commit/_SPI_rollback have about when COMMIT/ROLLBACK are allowed, which seems appropriately symmetric. (If anyone ever tries to allow COMMIT/ROLLBACK inside a subtransaction, this would all need to be rethought ... but I'm unconvinced that such a thing could be logically consistent at all.) For further consistency, also check IsSubTransaction() in SPI_inside_nonatomic_context. That does not matter for its one present-day caller StartTransaction, which can't be reached inside a subtransaction. But if any other callers ever arise, they'd presumably want this definition. Per bug #18656 from Alexander Alehin. Back-patch to all supported branches, like previous fixes in this area. Discussion: https://postgr.es/m/18656-cade1780866ef66c@postgresql.org
2024-09-09  Don't bother checking the result of SPI_connect[_ext] anymore.  (Tom Lane)
SPI_connect/SPI_connect_ext have not returned any value other than SPI_OK_CONNECT since commit 1833f1a1c in v10; any errors are thrown via ereport. (The most likely failure is out-of-memory, which has always been thrown that way, so callers had better be prepared for such errors.) This makes it somewhat pointless to check these functions' result, and some callers within our code haven't been bothering; indeed, the only usage example within spi.sgml doesn't bother. So it's likely that the omission has propagated into extensions too. Hence, let's standardize on not checking, and document the return value as historical, while not actually changing these functions' behavior. (The original proposal was to change their return type to "void", but that would needlessly break extensions that are conforming to the old practice.) This saves a small amount of boilerplate code in a lot of places. Author: Stepan Neretin Discussion: https://postgr.es/m/CAMaYL5Z9Uk8cD9qGz9QaZ2UBJFOu7jFx5Mwbznz-1tBbPDQZow@mail.gmail.com
2024-09-05  Fix misleading error message context  (Peter Eisentraut)
Author: Pavel Stehule <pavel.stehule@gmail.com> Reviewed-by: Stepan Neretin <sncfmgg@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRAw+OkVW=FgMKHKyvY3CgtWy3cWdY7XT+S5TJaTttu=oA@mail.gmail.com
2024-08-07  Fix edge case in plpgsql's make_callstmt_target().  (Tom Lane)
If the plancache entry for the CALL statement is already stale, it's possible for us to fetch an old procedure OID out of it, and then fail with "cache lookup failed for function NNN". In ordinary usage this never happens because make_callstmt_target is called just once immediately after building the plancache entry. It can be forced however by setting up an erroneous CALL (that causes make_callstmt_target itself to report an error), then dropping/recreating the target procedure, then repeating the erroneous CALL. To fix, use SPI_plan_get_cached_plan() to fetch the plancache's plan, rather than assuming we can use SPI_plan_get_plan_sources(). This shouldn't add any noticeable overhead in the normal case, and in the stale-plan case we'd have had to replan anyway a little further down. The other callers of SPI_plan_get_plan_sources() seem OK, because either they don't need up-to-date plans or they know that the query was just (re) planned. But add some commentary in hopes of not falling into this trap again. Per bug #18574 from Song Hongyu. Back-patch to v14 where this coding was introduced. (Older branches have comparable code, but it's run after any required replanning, so there's no issue.) Discussion: https://postgr.es/m/18574-2ce7ba3249221389@postgresql.org
2024-08-02  Include bison header files into implementation files  (Peter Eisentraut)
Before Bison 3.4, the generated parser implementation files run afoul of -Wmissing-variable-declarations (in spite of commit ab61c40bfa2) because declarations for yylval and possibly yylloc are missing. The generated header files contain an extern declaration, but the implementation files don't include the header files. Since Bison 3.4, the generated implementation files automatically include the generated header files, so then it works. To make this work with older Bison versions as well, include the generated header file from the .y file. (With older Bison versions, the generated implementation file contains effectively a copy of the header file pasted in, so including the header file is redundant. But we know this works anyway because the core grammar uses this arrangement already.) Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
2024-07-25  Add extern declarations for Bison global variables  (Peter Eisentraut)
This adds extern declarations for some global variables produced by Bison that are not already declared in its generated header file. This is a workaround to be able to add -Wmissing-variable-declarations to the global set of warning options in the near future. Another longer-term solution would be to convert these grammars to "pure" parsers in Bison, to avoid global variables altogether. Note that the core grammar is already pure, so this patch did not need to touch it. Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
2024-07-22  Doc: improve description of plpgsql's FETCH and MOVE commands.  (Tom Lane)
We were not being clear about which variants of the "direction" clause are permitted in MOVE. Also, the text seemed to be written with only the FETCH/MOVE NEXT case in mind, so it didn't apply very well to other variants. Also, document that "MOVE count IN cursor" only works if count is a constant. This is not the whole truth, because some other cases such as a parenthesized expression will also work, but we want to push people to use "MOVE FORWARD count" instead. The constant case is enough to cover what we allow in plain SQL, and that seems sufficient to claim support for. Update a comment in pl_gram.y claiming that we don't document that point. Per gripe from Philipp Salvisberg. Discussion: https://postgr.es/m/172155553388.702.7932496598218792085@wrigleys.postgresql.org
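For example, in PL/pgSQL both of these reposition a cursor, but the documentation now steers people toward the FORWARD form because the bare-count form requires a constant:

    DO $$
    DECLARE
        c CURSOR FOR SELECT g FROM generate_series(1, 100) AS g;
        v int;
    BEGIN
        OPEN c;
        MOVE FORWARD 10 FROM c;   -- preferred spelling
        FETCH c INTO v;           -- v is 11
        MOVE 5 IN c;              -- "MOVE count IN cursor": count must be a constant
        FETCH c INTO v;           -- v is 17
        CLOSE c;
    END
    $$;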
2024-07-02  Convert some extern variables to static  (Peter Eisentraut)
These probably should have been static all along; it was only forgotten out of sloppiness. Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
2024-06-13  When replanning a plpgsql "simple expression", check it's still simple.  (Tom Lane)
The previous coding here assumed that we didn't need to recheck any of the querytree tests made in exec_simple_check_plan(). I think we supposed that those properties were fully determined by the syntax of the source text and hence couldn't change. That is true for most of them, but at least hasTargetSRFs and hasAggs can change by dint of forcibly dropping an originally-referenced function and recreating it with new properties. That leads to "unexpected plan node type" or similar failures. These tests are pretty cheap compared to the cost of replanning, so rather than sweat over exactly which properties need to be rechecked, let's just recheck them all. Hence, factor out those tests into a new function exec_is_simple_query(), and rearrange callers as needed. A second problem in the same area was that if we failed during replanning or during exec_save_simple_expr(), we'd potentially leave behind now-dangling pointers to the old simple expression, potentially resulting in crashes later. To fix, clear those pointers before replanning. The v12 code looks quite different in this area but still has the bug about needing to recheck query simplicity. I chose to back-patch all of the plpgsql_simple.sql test script, which formerly didn't exist in this branch. Per bug #18497 from Nikita Kalinin. Back-patch to all supported branches. Discussion: https://postgr.es/m/18497-fe93b6da82ce31d4@postgresql.org
2024-06-07  Fix behavior of stable functions called from a CALL's argument list.  (Tom Lane)
If the CALL is within an atomic context (e.g. there's an outer transaction block), _SPI_execute_plan should acquire a fresh snapshot to execute any such functions with. We failed to do that and instead passed them the Portal snapshot, which had been acquired at the start of the current SQL command. This'd lead to seeing stale values of rows modified since the start of the command. This is arguably a bug in 84f5c2908: I failed to see that "are we in non-atomic mode" needs to be defined the same way as it is further down in _SPI_execute_plan, i.e. check !_SPI_current->atomic not just options->allow_nonatomic. Alternatively the blame could be laid on plpgsql, which is unconditionally passing allow_nonatomic = true for CALL/DO even when it knows it's in an atomic context. However, fixing it in spi.c seems like a better idea since that will also fix the problem for any extensions that may have copied plpgsql's coding pattern. While here, update an obsolete comment about _SPI_execute_plan's snapshot management. Per report from Victor Yegorov. Back-patch to all supported versions. Discussion: https://postgr.es/m/CAGnEboiRe+fG2QxuBO2390F7P8e2MQ6UyBjZSL_w1Cej+E4=Vw@mail.gmail.com
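A rough reproduction sketch under assumed names (t, cnt, report): inside an outer transaction block the CALL runs in an atomic context, and the stable function in its argument list should now see the row inserted earlier in the same DO command.

    CREATE TABLE t (x int);
    CREATE FUNCTION cnt() RETURNS bigint STABLE
        LANGUAGE sql AS 'SELECT count(*) FROM t';
    CREATE PROCEDURE report(c bigint)
        LANGUAGE plpgsql AS $$ BEGIN RAISE NOTICE 'count = %', c; END $$;

    BEGIN;                        -- outer transaction block => atomic context
    DO $$
    BEGIN
        INSERT INTO t VALUES (1);
        CALL report(cnt());       -- cnt() should report 1, not a stale 0
    END
    $$;
    COMMIT;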
2024-05-15  Fix handling of polymorphic output arguments for procedures.  (Tom Lane)
Most of the infrastructure for procedure arguments was already okay with polymorphic output arguments, but it turns out that CallStmtResultDesc() was a few bricks shy of a load here. It thought all it needed to do was call build_function_result_tupdesc_t, but that function specifically disclaims responsibility for resolving polymorphic arguments. Failing to handle that doesn't seem to be a problem for CALL in plpgsql, but CALL from plain SQL would get errors like "cannot display a value of type anyelement", or even crash outright. In v14 and later we can simply examine the exposed types of the CallStmt.outargs nodes to get the right type OIDs. But it's a lot more complicated to fix in v12/v13, because those versions don't have CallStmt.outargs, nor do they do expand_function_arguments until ExecuteCallStmt runs. We have to duplicatively run expand_function_arguments, and then re-determine which elements of the args list are output arguments. Per bug #18463 from Drew Kimball. Back-patch to all supported versions, since it's busted in all of them. Discussion: https://postgr.es/m/18463-f8cd77e12564d8a2@postgresql.org
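For illustration, a procedure with a polymorphic OUT parameter that worked when CALLed from plpgsql but could fail when CALLed from plain SQL (names are hypothetical):

    CREATE PROCEDURE echo(IN x anyelement, OUT result anyelement)
    LANGUAGE plpgsql AS $$
    BEGIN
        result := x;
    END;
    $$;

    -- Calling from plain SQL: the OUT argument is written as NULL and its type
    -- is resolved from the IN argument.  Before the fix this could fail with
    -- "cannot display a value of type anyelement".
    CALL echo(42, NULL);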
2024-05-06  Translation updates  (Peter Eisentraut)
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: be182cc55e6f72c66215fd9b38851969e3ce5480
2024-04-10  Fix plpgsql's handling of -- comments following expressions.  (Tom Lane)
Up to now, read_sql_construct() has collected all the source text from the statement or expression's initial token up to the character just before the "until" token. It normally tries to strip trailing whitespace from that, largely for neatness. If there was a "-- text" comment after the expression, this resulted in removing the newline that terminates the comment, which creates a hazard if we try to paste the collected text into a larger SQL construct without inserting a newline after it. In particular this caused our handling of CASE constructs to fail if there's a comment after a WHEN expression. Commit 4adead1d2 noticed a similar problem with cursor arguments, and worked around it through the rather crude hack of suppressing the whitespace-trimming behavior for those. Rather than do that and leave the hazard open for future hackers to trip over, let's fix it properly. pl_scanner.c already has enough infrastructure to report the end location of the expression's last token, so we can copy up to that location and never collect any trailing whitespace or comment to begin with. Erik Wienhold and Tom Lane, per report from Michal Bartak. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAAVzF_FjRoi8fOVuLCZhQJx6HATQ7MKm=aFOHWZODFnLmjX-xA@mail.gmail.com
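Roughly the previously failing shape: a simple CASE whose WHEN expression is followed by a -- comment, which used to lose its terminating newline when the collected expression text was pasted into the constructed SQL:

    DO $$
    DECLARE
        x int := 1;
    BEGIN
        CASE x
            WHEN 1 -- trailing comment after the WHEN expression
                THEN RAISE NOTICE 'one';
            ELSE
                RAISE NOTICE 'something else';
        END CASE;
    END
    $$;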
2024-03-17  Add RETURNING support to MERGE.  (Dean Rasheed)
This allows a RETURNING clause to be appended to a MERGE query, to return values based on each row inserted, updated, or deleted. As with plain INSERT, UPDATE, and DELETE commands, the returned values are based on the new contents of the target table for INSERT and UPDATE actions, and on its old contents for DELETE actions. Values from the source relation may also be returned. As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be used as the source relation for other operations such as WITH queries and COPY commands. Additionally, a special function merge_action() is provided, which returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action executed for each row. The merge_action() function can be used anywhere in the RETURNING list, including in arbitrary expressions and subqueries, but it is an error to use it anywhere outside of a MERGE query's RETURNING list. Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera, Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut, and Wolfgang Walther. Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
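A short example of the new syntax (table and column names are illustrative only):

    MERGE INTO inventory AS t
    USING incoming AS s ON t.item_id = s.item_id
    WHEN MATCHED THEN
        UPDATE SET qty = t.qty + s.qty
    WHEN NOT MATCHED THEN
        INSERT (item_id, qty) VALUES (s.item_id, s.qty)
    RETURNING merge_action(), t.item_id, t.qty;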
2024-03-03  Redefine backend ID to be an index into the proc array  (Heikki Linnakangas)
Previously, backend ID was an index into the ProcState array, in the shared cache invalidation manager (sinvaladt.c). The entry in the ProcState array was reserved at backend startup by scanning the array for a free entry, and that was also when the backend got its backend ID. Things become slightly simpler if we redefine backend ID to be the index into the PGPROC array, and directly use it also as an index to the ProcState array. This uses a little more memory, as we reserve a few extra slots in the ProcState array for aux processes that don't need them, but the simplicity is worth it. Aux processes now also have a backend ID. This simplifies the reservation of BackendStatusArray and ProcSignal slots. You can now convert a backend ID into an index into the PGPROC array simply by subtracting 1. We still use 0-based "pgprocnos" in various places, for indexes into the PGPROC array, but the only difference now is that backend IDs start at 1 while pgprocnos start at 0. (The next commit will get rid of the term "backend ID" altogether and make everything 0-based.) There is still a 'backendId' field in PGPROC, now part of 'vxid' which encapsulates the backend ID and local transaction ID together. It's needed for prepared xacts. For regular backends, the backendId is always equal to pgprocno + 1, but for prepared xact PGPROC entries, it's the ID of the original backend that processed the transaction. Reviewed-by: Andres Freund, Reid Thompson Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi