--- /dev/null
+From owner-pgsql-hackers@hub.org Wed Sep 22 20:31:02 1999
+Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
+ by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id UAA15611
+ for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 20:31:01 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id UAA02926 for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 20:21:24 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1])
+ by hub.org (8.9.3/8.9.3) with ESMTP id UAA75413;
+ Wed, 22 Sep 1999 20:09:35 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@hub.org)
+Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Wed, 22 Sep 1999 20:08:50 +0000 (EDT)
+Received: (from majordom@localhost)
+ by hub.org (8.9.3/8.9.3) id UAA75058
+ for pgsql-hackers-outgoing; Wed, 22 Sep 1999 20:06:58 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@postgreSQL.org)
+Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
+ by hub.org (8.9.3/8.9.3) with ESMTP id UAA74982
+ for <pgsql-hackers@postgreSQL.org>; Wed, 22 Sep 1999 20:06:25 -0400 (EDT)
+ (envelope-from tgl@sss.pgh.pa.us)
+Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
+ by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id UAA06411
+ for <pgsql-hackers@postgreSQL.org>; Wed, 22 Sep 1999 20:05:40 -0400 (EDT)
+To: pgsql-hackers@postgreSQL.org
+Subject: [HACKERS] Progress report: buffer refcount bugs and SQL functions
+Date: Wed, 22 Sep 1999 20:05:39 -0400
+Message-ID: <6408.938045139@sss.pgh.pa.us>
+From: Tom Lane <tgl@sss.pgh.pa.us>
+Sender: owner-pgsql-hackers@postgreSQL.org
+Precedence: bulk
+Status: RO
+
+I have been finding a lot of interesting stuff while looking into
+the buffer reference count/leakage issue.
+
+It turns out that there were two specific things that were camouflaging
+the existence of bugs in this area:
+
+1. The BufferLeakCheck routine that's run at transaction commit was
+only looking for nonzero PrivateRefCount to indicate a missing unpin.
+It failed to notice nonzero LastRefCount --- which meant that an
+error in refcount save/restore usage could leave a buffer pinned,
+and BufferLeakCheck wouldn't notice.
+
+2. The BufferIsValid macro, which you'd think just checks whether
+it's handed a valid buffer identifier or not, actually did more:
+it only returned true if the buffer ID was valid *and* the buffer
+had positive PrivateRefCount. That meant that the common pattern
+ if (BufferIsValid(buf))
+ ReleaseBuffer(buf);
+wouldn't complain if it were handed a valid but already unpinned buffer.
+And that behavior masks bugs that result in buffers being unpinned too
+early. For example, consider a sequence like
+
+1. LockBuffer (buffer now has refcount 1). Store reference to
+ a tuple on that buffer page in a tuple table slot.
+2. Copy buffer reference to a second tuple-table slot, but forget to
+ increment buffer's refcount.
+3. Release second tuple table slot. Buffer refcount drops to 0,
+ so it's unpinned.
+4. Release original tuple slot. Because of BufferIsValid behavior,
+ no assert happens here; in fact nothing at all happens.
+
+This is, of course, buggy code: during the interval from 3 to 4 you
+still have an apparently valid tuple reference in the original slot,
+which someone might try to use; but the buffer it points to is unpinned
+and could be replaced at any time by another backend.
+
+In short, we had errors that would mask both missing-pin bugs and
+missing-unpin bugs. And naturally there were a few such bugs lurking
+behind them...
+
+3. The buffer refcount save/restore stuff, which I had suspected
+was useless, is not only useless but also buggy. The reason it's
+buggy is that it only works if used in a nested fashion. You could
+save state A, pin some buffers, save state B, pin some more
+buffers, restore state B (thereby unpinning what you pinned since
+the save), and finally restore state A (unpinning the earlier stuff).
+What you could not do is save state A, pin, save B, pin more, then
+restore state A --- that might unpin some of A's buffers, or some
+of B's buffers, or some unforeseen combination thereof. If you
+restore A and then restore B, you do not necessarily return to a zero-
+pins state, either. And it turns out the actual usage pattern was a
+nearly random sequence of saves and restores, compounded by a failure to
+do all of the restores reliably (which was masked by the oversight in
+BufferLeakCheck).
+
+
+What I have done so far is to rip out the buffer refcount save/restore
+support (including LastRefCount), change BufferIsValid to a simple
+validity check (so that you get an assert if you unpin something that
+was pinned), change ExecStoreTuple so that it increments the refcount
+when it is handed a buffer reference (for symmetry with ExecClearTuple's
+decrement of the refcount), and fix about a dozen bugs exposed by these
+changes.
+
+I am still getting Buffer Leak notices in the "misc" regression test,
+specifically in the queries that invoke more than one SQL function.
+What I find there is that SQL functions are not always run to
+completion. Apparently, when a function can return multiple tuples,
+it won't necessarily be asked to produce them all. And when it isn't,
+postquel_end() isn't invoked for the function's current query, so its
+tuple table isn't cleared, so we have dangling refcounts if any of the
+tuples involved are in disk buffers.
+
+It may be that the save/restore code was a misguided attempt to fix
+this problem. I can't tell. But I think what we really need to do is
+find some way of ensuring that Postquel function execution contexts
+always get shut down by the end of the query, so that they don't leak
+resources.
+
+I suppose a straightforward approach would be to keep a list of open
+function contexts somewhere (attached to the outer execution context,
+perhaps), and clean them up at outer-plan shutdown.
+
+What I am wondering, though, is whether this addition is actually
+necessary, or is it a bug that the functions aren't run to completion
+in the first place? I don't really understand the semantics of this
+"nested dot notation". I suppose it is a Berkeleyism; I can't find
+anything about it in the SQL92 document. The test cases shown in the
+misc regress test seem peculiar, not to say wrong. For example:
+
+regression=> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name FROM person p;
+name |name |name
+-------------+-----------+-----
+advil |posthacking|mike
+peet's coffee|basketball |joe
+hightops |basketball |sally
+(3 rows)
+
+which doesn't appear to agree with the contents of the underlying
+relations:
+
+regression=> SELECT * FROM hobbies_r;
+name |person
+-----------+------
+posthacking|mike
+posthacking|jeff
+basketball |joe
+basketball |sally
+skywalking |
+(5 rows)
+
+regression=> SELECT * FROM equipment_r;
+name |hobby
+-------------+-----------
+advil |posthacking
+peet's coffee|posthacking
+hightops |basketball
+guts |skywalking
+(4 rows)
+
+I'd have expected an output along the lines of
+
+advil |posthacking|mike
+peet's coffee|posthacking|mike
+hightops |basketball |joe
+hightops |basketball |sally
+
+Is the regression test's expected output wrong, or am I misunderstanding
+what this query is supposed to do? Is there any documentation anywhere
+about how SQL functions returning multiple tuples are supposed to
+behave?
+
+ regards, tom lane
+
+************
+
+
+From owner-pgsql-hackers@hub.org Thu Sep 23 11:03:19 1999
+Received: from hub.org (hub.org [216.126.84.1])
+ by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id LAA16211
+ for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 11:03:17 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1])
+ by hub.org (8.9.3/8.9.3) with ESMTP id KAA58151;
+ Thu, 23 Sep 1999 10:53:46 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@hub.org)
+Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 10:53:05 +0000 (EDT)
+Received: (from majordom@localhost)
+ by hub.org (8.9.3/8.9.3) id KAA57948
+ for pgsql-hackers-outgoing; Thu, 23 Sep 1999 10:52:23 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@postgreSQL.org)
+Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
+ by hub.org (8.9.3/8.9.3) with ESMTP id KAA57841
+ for <hackers@postgreSQL.org>; Thu, 23 Sep 1999 10:51:50 -0400 (EDT)
+ (envelope-from tgl@sss.pgh.pa.us)
+Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
+ by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id KAA14211;
+ Thu, 23 Sep 1999 10:51:10 -0400 (EDT)
+To: Andreas Zeugswetter <andreas.zeugswetter@telecom.at>
+cc: hackers@postgreSQL.org
+Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
+In-reply-to: Your message of Thu, 23 Sep 1999 10:07:24 +0200
+ <37E9DFBC.5C0978F@telecom.at>
+Date: Thu, 23 Sep 1999 10:51:10 -0400
+Message-ID: <14209.938098270@sss.pgh.pa.us>
+From: Tom Lane <tgl@sss.pgh.pa.us>
+Sender: owner-pgsql-hackers@postgreSQL.org
+Precedence: bulk
+Status: RO
+
+Andreas Zeugswetter <andreas.zeugswetter@telecom.at> writes:
+> That is what I use it for. I have never used it with a
+> returns setof function, but reading the comments in the regression test,
+> -- mike needs advil and peet's coffee,
+> -- joe and sally need hightops, and
+> -- everyone else is fine.
+> it looks like the results you expected are correct, and currently the
+> wrong result is given.
+
+Yes, I have concluded the same (and partially fixed it, per my previous
+message).
+
+> Those that don't have a hobby should return name|NULL|NULL. A hobby
+> that doesn't need equipment, name|hobby|NULL.
+
+That's a good point. Currently (both with and without my uncommitted
+fix) you get *no* rows out from ExecTargetList if there are any Iters
+that return empty result sets. It might be more reasonable to treat an
+empty result set as if it were NULL, which would give the behavior you
+suggest.
+
+This would be an easy change to my current patch, and I'm prepared to
+make it before committing what I have, if people agree that that's a
+more reasonable definition. Comments?
+
+ regards, tom lane
+
+************
+
+
+From owner-pgsql-hackers@hub.org Thu Sep 23 04:31:15 1999
+Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
+ by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id EAA11344
+ for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 04:31:15 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id EAA05350 for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 04:24:29 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1])
+ by hub.org (8.9.3/8.9.3) with ESMTP id EAA85679;
+ Thu, 23 Sep 1999 04:16:26 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@hub.org)
+Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 04:09:52 +0000 (EDT)
+Received: (from majordom@localhost)
+ by hub.org (8.9.3/8.9.3) id EAA84708
+ for pgsql-hackers-outgoing; Thu, 23 Sep 1999 04:08:57 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@postgreSQL.org)
+Received: from gandalf.telecom.at (gandalf.telecom.at [194.118.26.84])
+ by hub.org (8.9.3/8.9.3) with ESMTP id EAA84632
+ for <hackers@postgresql.org>; Thu, 23 Sep 1999 04:08:03 -0400 (EDT)
+ (envelope-from andreas.zeugswetter@telecom.at)
+Received: from telecom.at (w0188000580.f000.d0188.sd.spardat.at [172.18.65.249])
+ by gandalf.telecom.at (xxx/xxx) with ESMTP id KAA195294
+ for <hackers@postgresql.org>; Thu, 23 Sep 1999 10:07:27 +0200
+Message-ID: <37E9DFBC.5C0978F@telecom.at>
+Date: Thu, 23 Sep 1999 10:07:24 +0200
+From: Andreas Zeugswetter <andreas.zeugswetter@telecom.at>
+X-Mailer: Mozilla 4.61 [en] (Win95; I)
+X-Accept-Language: en
+MIME-Version: 1.0
+To: hackers@postgreSQL.org
+Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
+Content-Type: text/plain; charset=us-ascii
+Content-Transfer-Encoding: 7bit
+Sender: owner-pgsql-hackers@postgreSQL.org
+Precedence: bulk
+Status: RO
+
+> Is the regression test's expected output wrong, or am I misunderstanding
+> what this query is supposed to do? Is there any documentation anywhere
+> about how SQL functions returning multiple tuples are supposed to
+> behave?
+
+They are supposed to behave somewhat like a view.
+Not all rows are necessarily fetched.
+If used in a context that needs a single-row answer,
+and the answer has multiple rows, it is supposed to
+elog at runtime. Like in:
+
+select * from tbl where col=funcreturningmultipleresults();
+-- this must elog
+
+while this is ok:
+select * from tbl where col in (select funcreturningmultipleresults());
+
+But the caller could fetch only the first row if he wanted.
+
+The nested notation is supposed to call the function passing it the tuple
+as the first argument. This is what can be used to "fake" a column
+onto a table (computed column).
+That is what I use it for. I have never used it with a
+returns setof function, but reading the comments in the regression test,
+-- mike needs advil and peet's coffee,
+-- joe and sally need hightops, and
+-- everyone else is fine.
+it looks like the results you expected are correct, and currently the
+wrong result is given.
+
+But I think this query could also elog without removing substantial
+functionality.
+
+SELECT p.name, p.hobbies.name, p.hobbies.equipment.name FROM person p;
+
+Actually, for me it would be intuitive that this query returns one row per
+person, but elogs on those that have more than one hobby or a hobby that
+needs more than one piece of equipment. Those that don't have a hobby should
+return name|NULL|NULL. A hobby that doesn't need equipment, name|hobby|NULL.
+
+Andreas
+
+************
+
+
+From owner-pgsql-hackers@hub.org Wed Sep 22 22:01:07 1999
+Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
+ by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id WAA16360
+ for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 22:01:05 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id VAA08386 for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 21:37:24 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1])
+ by hub.org (8.9.3/8.9.3) with ESMTP id VAA88083;
+ Wed, 22 Sep 1999 21:28:11 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@hub.org)
+Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Wed, 22 Sep 1999 21:27:48 +0000 (EDT)
+Received: (from majordom@localhost)
+ by hub.org (8.9.3/8.9.3) id VAA87938
+ for pgsql-hackers-outgoing; Wed, 22 Sep 1999 21:26:52 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@postgreSQL.org)
+Received: from orion.SAPserv.Hamburg.dsh.de (Tpolaris2.sapham.debis.de [53.2.131.8])
+ by hub.org (8.9.3/8.9.3) with SMTP id VAA87909
+ for <pgsql-hackers@postgresql.org>; Wed, 22 Sep 1999 21:26:36 -0400 (EDT)
+ (envelope-from wieck@debis.com)
+Received: by orion.SAPserv.Hamburg.dsh.de
+ for pgsql-hackers@postgresql.org
+ id m11TxXw-0003kLC; Thu, 23 Sep 99 03:19 MET DST
+Message-Id: <m11TxXw-0003kLC@orion.SAPserv.Hamburg.dsh.de>
+From: wieck@debis.com (Jan Wieck)
+Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
+To: tgl@sss.pgh.pa.us (Tom Lane)
+Date: Thu, 23 Sep 1999 03:19:39 +0200 (MET DST)
+Cc: pgsql-hackers@postgreSQL.org
+Reply-To: wieck@debis.com (Jan Wieck)
+In-Reply-To: <6408.938045139@sss.pgh.pa.us> from "Tom Lane" at Sep 22, 99 08:05:39 pm
+X-Mailer: ELM [version 2.4 PL25]
+Content-Type: text
+Sender: owner-pgsql-hackers@postgreSQL.org
+Precedence: bulk
+Status: RO
+
+Tom Lane wrote:
+
+> [...]
+>
+> What I am wondering, though, is whether this addition is actually
+> necessary, or is it a bug that the functions aren't run to completion
+> in the first place? I don't really understand the semantics of this
+> "nested dot notation". I suppose it is a Berkeleyism; I can't find
+> anything about it in the SQL92 document. The test cases shown in the
+> misc regress test seem peculiar, not to say wrong. For example:
+>
+> [...]
+>
+> Is the regression test's expected output wrong, or am I misunderstanding
+> what this query is supposed to do? Is there any documentation anywhere
+> about how SQL functions returning multiple tuples are supposed to
+> behave?
+
+ I've said some time (maybe too long) ago, that SQL functions
+ returning tuple sets are broken in general. This nested dot
+ notation (which I think is an artefact from the postquel
+ querylanguage) is implemented via set functions.
+
+ Set functions have totally different semantics from all other
+ functions. First, they don't really return a tuple set as
+ someone might think - all that screwed-up code instead
+ simulates that they return something you could consider a
+ scan of the last SQL statement in the function. Then, on
+ each subsequent call inside of the same command, they return
+ a "tupletable slot" containing the next found tuple (that's
+ why their Func node is mangled up after the first call).
+
+ Second, they have a targetlist that I think was originally
+ intended to extract attributes out of the tuples returned
+ when the above scan is asked to get the next tuple. But as I
+ read the code, it invokes the function again, and this might
+ cause the resource leakage you see.
+
+ Third, all this seems never to have been implemented
+ (thought?) through to the end. A targetlist doesn't make sense
+ at this place because it could at most contain a single
+ attribute - so a single attno would have the same power. And
+ if set functions could appear in the rangetable (FROM clause),
+ then they would be treated as that and regular Var nodes in
+ the query would do it.
+
+ I think you shouldn't really care for that regression test
+ and maybe we should disable set functions until we really
+ implement stored procedures returning sets in the rangetable.
+
+ Set functions were planned by Stonebraker's team as
+ something that today is called stored procedures. But AFAIK
+ they never reached a useful state, because even in Postgres
+ 4.2 you couldn't get more than one attribute out
+ of a set function. It was a feature of the postquel
+ query language that you could get one attribute from a set
+ function via
+
+ RETRIEVE (attributename(setfuncname()))
+
+ While working on the constraint triggers I've come across
+ another regression test (triggers :-) that's erroneous too.
+ The funny_dup17 trigger proc executes an INSERT into the same
+ relation it gets fired for by a previous INSERT. And it
+ stops this recursion only if it reaches a nesting level of
+ 17, which could only occur if it were fired DURING the
+ execution of its own SPI_exec(). After Vadim quoted some
+ SQL92 definitions about when constraint checks and triggers
+ are to be executed, I decided to fire regular triggers at the
+ end of a query too. Thus, there is absolutely no nesting
+ possible for AFTER triggers, resulting in an endless loop.
+
+
+Jan
+
+--
+
+#======================================================================#
+# It's easier to get forgiveness for being wrong than for being right. #
+# Let's break this rule - forgive me. #
+#========================================= wieck@debis.com (Jan Wieck) #
+
+
+
+************
+
+
+From owner-pgsql-hackers@hub.org Thu Sep 23 11:01:06 1999
+Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
+ by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id LAA16162
+ for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 11:01:04 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id KAA28544 for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 10:45:54 -0400 (EDT)
+Received: from hub.org (hub.org [216.126.84.1])
+ by hub.org (8.9.3/8.9.3) with ESMTP id KAA52943;
+ Thu, 23 Sep 1999 10:20:51 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@hub.org)
+Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 10:19:58 +0000 (EDT)
+Received: (from majordom@localhost)
+ by hub.org (8.9.3/8.9.3) id KAA52472
+ for pgsql-hackers-outgoing; Thu, 23 Sep 1999 10:19:03 -0400 (EDT)
+ (envelope-from owner-pgsql-hackers@postgreSQL.org)
+Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
+ by hub.org (8.9.3/8.9.3) with ESMTP id KAA52431
+ for <pgsql-hackers@postgresql.org>; Thu, 23 Sep 1999 10:18:47 -0400 (EDT)
+ (envelope-from tgl@sss.pgh.pa.us)
+Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
+ by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id KAA13253;
+ Thu, 23 Sep 1999 10:18:02 -0400 (EDT)
+To: wieck@debis.com (Jan Wieck)
+cc: pgsql-hackers@postgreSQL.org
+Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
+In-reply-to: Your message of Thu, 23 Sep 1999 03:19:39 +0200 (MET DST)
+ <m11TxXw-0003kLC@orion.SAPserv.Hamburg.dsh.de>
+Date: Thu, 23 Sep 1999 10:18:01 -0400
+Message-ID: <13251.938096281@sss.pgh.pa.us>
+From: Tom Lane <tgl@sss.pgh.pa.us>
+Sender: owner-pgsql-hackers@postgreSQL.org
+Precedence: bulk
+Status: RO
+
+wieck@debis.com (Jan Wieck) writes:
+> Tom Lane wrote:
+>> What I am wondering, though, is whether this addition is actually
+>> necessary, or is it a bug that the functions aren't run to completion
+>> in the first place?
+
+> I've said some time (maybe too long) ago, that SQL functions
+> returning tuple sets are broken in general.
+
+Indeed they are. Try this on for size (using the regression database):
+
+ SELECT p.name, p.hobbies.equipment.name FROM person p;
+ SELECT p.hobbies.equipment.name, p.name FROM person p;
+
+You get different result sets!?
+
+The problem in this example is that ExecTargetList returns the isDone
+flag from the last targetlist entry, regardless of whether there are
+incomplete iterations in previous entries. More generally, the buffer
+leak problem that I started with only occurs if some Iter nodes are not
+run to completion --- but execQual.c has no mechanism to make sure that
+they have all reached completion simultaneously.
+
+What we really need to make functions-returning-sets work properly is
+an implementation somewhat like aggregate functions. We need to make
+a list of all the Iter nodes present in a targetlist and cycle through
+the values returned by each in a methodical fashion (run the rightmost
+through its full cycle, then advance the next-to-rightmost one value,
+run the rightmost through its cycle again, etc etc). Also there needs
+to be an understanding of the hierarchy when an Iter appears in the
+arguments of another Iter's function. (You cycle the upper one for
+*each* set of arguments created by cycling its sub-Iters.)
+
+I am not particularly interested in working on this feature right now,
+since AFAIK it's a Berkeleyism not found in SQL92. What I've done
+is to hack ExecTargetList so that it behaves semi-sanely when there's
+more than one Iter at the top level of the target list --- it still
+doesn't really give the right answer, but at least it will keep
+generating tuples until all the Iters are done at the same time.
+It happens that that's enough to give correct answers for the examples
+shown in the misc regress test. Even when it fails to generate all
+the possible combinations, there will be no buffer leaks.
+
+So, I'm going to declare victory and go home ;-). We ought to add a
+TODO item along the lines of
+ * Functions returning sets don't really work right
+in hopes that someone will feel like tackling this someday.
+
+ regards, tom lane
+
+************
+
+
#
# Copyright (c) 1994, Regents of the University of California
#
-# $Header: /cvsroot/pgsql/src/bin/pg_dump/Makefile,v 1.17 2000/07/03 16:35:39 petere Exp $
+# $Header: /cvsroot/pgsql/src/bin/pg_dump/Makefile,v 1.18 2000/07/04 14:25:26 momjian Exp $
#
#-------------------------------------------------------------------------
top_builddir = ../../..
include ../../Makefile.global
-OBJS= pg_dump.o common.o $(STRDUP)
+OBJS= pg_backup_archiver.o pg_backup_custom.o pg_backup_files.o \
+ pg_backup_plain_text.o $(STRDUP)
CFLAGS+= -I$(LIBPQDIR)
+LDFLAGS+= -lz
+all: submake pg_dump$(X) pg_restore$(X)
-all: submake pg_dump pg_dumpall
+pg_dump$(X): pg_dump.o common.o $(OBJS) $(LIBPQDIR)/libpq.a
+ $(CC) $(CFLAGS) -o $@ pg_dump.o common.o $(OBJS) $(LIBPQ) $(LDFLAGS)
-pg_dump: $(OBJS) $(LIBPQDIR)/libpq.a
- $(CC) $(CFLAGS) -o $@ $(OBJS) $(LIBPQ) $(LDFLAGS)
-
-pg_dumpall: pg_dumpall.sh
- sed -e 's:__VERSION__:$(VERSION):g' \
- -e 's:__MULTIBYTE__:$(MULTIBYTE):g' \
- -e 's:__bindir__:$(bindir):g' \
- < $< > $@
+pg_restore$(X): pg_restore.o $(OBJS) $(LIBPQDIR)/libpq.a
+ $(CC) $(CFLAGS) -o $@ pg_restore.o $(OBJS) $(LIBPQ) $(LDFLAGS)
../../utils/strdup.o:
$(MAKE) -C ../../utils strdup.o
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) $(bindir)/pg_dump$(X)
+ $(INSTALL_PROGRAM) pg_restore$(X) $(bindir)/pg_restore$(X)
$(INSTALL_SCRIPT) pg_dumpall $(bindir)/pg_dumpall
$(INSTALL_SCRIPT) pg_upgrade $(bindir)/pg_upgrade
$(CC) -MM $(CFLAGS) *.c >depend
clean distclean maintainer-clean:
- rm -f pg_dump$(X) $(OBJS) pg_dumpall
+ rm -f pg_dump$(X) pg_restore$(X) $(OBJS) pg_dump.o common.o pg_restore.o
ifeq (depend,$(wildcard depend))
include depend
--- /dev/null
+Notes on pg_dump
+================
+
+pg_dump, by default, still outputs text files.
+
+pg_dumpall forces all pg_dump output to be text, since it also outputs text into the same output stream.
+
+The plain text output format cannot be used as input into pg_restore.
+
+
+To dump a database into the new custom format, type:
+
+ pg_dump <db-name> -Fc > <backup-file>
+
+To restore, try
+
+ To list contents:
+
+ pg_restore -l <backup-file> | less
+
+ or to list tables:
+
+ pg_restore <backup-file> --table | less
+
+ or to list in a different order:
+
+ pg_restore <backup-file> -l --oid --rearrange | less
+
+Once you are happy with the list, just remove the '-l', and an SQL script will be output.
+
+
+You can also dump a listing:
+
+ pg_restore -l <backup-file> > toc.lis
+ or
+ pg_restore -l <backup-file> -f toc.lis
+
+edit it, and rearrange the lines (or delete some):
+
+ vi toc.lis
+
+then use it to restore selected items:
+
+ pg_restore <backup-file> --use=toc.lis -l | less
+
+When you like the list, type
+
+ pg_restore backup.bck --use=toc.lis > script.sql
+
+or, simply:
+
+ createdb newdbname
+ pg_restore backup.bck --use=toc.lis | psql newdbname
+
+
+Philip Warner, 3-Jul-2000
+pjw@rhyme.com.au
+
+
+
*
*
* IDENTIFICATION
- * $Header: /cvsroot/pgsql/src/bin/pg_dump/common.c,v 1.43 2000/06/14 18:17:50 petere Exp $
+ * $Header: /cvsroot/pgsql/src/bin/pg_dump/common.c,v 1.44 2000/07/04 14:25:27 momjian Exp $
*
* Modifications - 6/12/96 - dave@bensoft.com - version 1.13.dhb.2
*
*/
TableInfo *
-dumpSchema(FILE *fout,
- int *numTablesPtr,
- const char *tablename,
- const bool aclsSkip)
+dumpSchema(Archive *fout,
+ int *numTablesPtr,
+ const char *tablename,
+ const bool aclsSkip,
+ const bool oids,
+ const bool schemaOnly,
+ const bool dataOnly)
{
int numTypes;
int numFuncs;
g_comment_start, g_comment_end);
flagInhAttrs(tblinfo, numTables, inhinfo, numInherits);
- if (!tablename && fout)
+ if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out database comment %s\n",
dumpTypes(fout, finfo, numFuncs, tinfo, numTypes);
}
- if (fout)
- {
- if (g_verbose)
- fprintf(stderr, "%s dumping out tables %s\n",
- g_comment_start, g_comment_end);
- dumpTables(fout, tblinfo, numTables, inhinfo, numInherits,
- tinfo, numTypes, tablename, aclsSkip);
- }
+ if (g_verbose)
+ fprintf(stderr, "%s dumping out tables %s\n",
+ g_comment_start, g_comment_end);
+ dumpTables(fout, tblinfo, numTables, inhinfo, numInherits,
+ tinfo, numTypes, tablename, aclsSkip, oids, schemaOnly, dataOnly);
- if (!tablename && fout)
+ if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined procedural languages %s\n",
dumpProcLangs(fout, finfo, numFuncs, tinfo, numTypes);
}
- if (!tablename && fout)
+ if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined functions %s\n",
dumpFuncs(fout, finfo, numFuncs, tinfo, numTypes);
}
- if (!tablename && fout)
+ if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined aggregates %s\n",
dumpAggs(fout, agginfo, numAggregates, tinfo, numTypes);
}
- if (!tablename && fout)
+ if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined operators %s\n",
*/
extern void
-dumpSchemaIdx(FILE *fout, const char *tablename,
+dumpSchemaIdx(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
int numIndices;
--- /dev/null
+/*-------------------------------------------------------------------------\r
+ *\r
+ * pg_backup.h\r
+ *\r
+ * Public interface to the pg_dump archiver routines.\r
+ *\r
+ * See the headers to pg_restore for more details.\r
+ *\r
+ * Copyright (c) 2000, Philip Warner\r
+ * Rights are granted to use this software in any way so long\r
+ * as this notice is not removed.\r
+ *\r
+ * The author is not responsible for loss or damages that may\r
+ * result from its use.\r
+ *\r
+ *\r
+ * IDENTIFICATION\r
+ *\r
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au\r
+ *\r
+ * Initial version. \r
+ *\r
+ *-------------------------------------------------------------------------\r
+ */\r
+\r
+#ifndef PG_BACKUP__\r
+\r
+#include "config.h"\r
+#include "c.h"\r
+\r
+#define PG_BACKUP__\r
+\r
+typedef enum _archiveFormat {\r
+ archUnknown = 0,\r
+ archCustom = 1,\r
+ archFiles = 2,\r
+ archTar = 3,\r
+ archPlainText = 4\r
+} ArchiveFormat;\r
+\r
+/*\r
+ * We may want to have some user-readable data here, but in the\r
+ * meantime this gives us some abstraction and type checking.\r
+ */\r
+typedef struct _Archive {\r
+ /* Nothing here */\r
+} Archive;\r
+\r
+typedef int (*DataDumperPtr)(Archive* AH, char* oid, void* userArg);\r
+\r
+typedef struct _restoreOptions {\r
+ int dataOnly;\r
+ int dropSchema;\r
+ char *filename;\r
+ int schemaOnly;\r
+ int verbose;\r
+ int aclsSkip;\r
+ int tocSummary;\r
+ char *tocFile;\r
+ int oidOrder;\r
+ int origOrder;\r
+ int rearrange;\r
+ int format;\r
+ char *formatName;\r
+\r
+ int selTypes;\r
+ int selIndex;\r
+ int selFunction;\r
+ int selTrigger;\r
+ int selTable;\r
+ char *indexNames;\r
+ char *functionNames;\r
+ char *tableNames;\r
+ char *triggerNames;\r
+\r
+ int *idWanted;\r
+ int limitToList;\r
+ int compression;\r
+\r
+} RestoreOptions;\r
+\r
+/*\r
+ * Main archiver interface.\r
+ */\r
+\r
+/* Called to add a TOC entry */\r
+extern void ArchiveEntry(Archive* AH, const char* oid, const char* name,\r
+ const char* desc, const char* (deps[]), const char* defn,\r
+ const char* dropStmt, const char* owner, \r
+ DataDumperPtr dumpFn, void* dumpArg);\r
+\r
+/* Called to write *data* to the archive */\r
+extern int WriteData(Archive* AH, const void* data, int dLen);\r
+\r
+extern void CloseArchive(Archive* AH);\r
+\r
+extern void RestoreArchive(Archive* AH, RestoreOptions *ropt);\r
+\r
+/* Open an existing archive */\r
+extern Archive* OpenArchive(const char* FileSpec, ArchiveFormat fmt);\r
+\r
+/* Create a new archive */\r
+extern Archive* CreateArchive(const char* FileSpec, ArchiveFormat fmt, int compression);\r
+\r
+/* The --list option */\r
+extern void PrintTOCSummary(Archive* AH, RestoreOptions *ropt);\r
+\r
+extern RestoreOptions* NewRestoreOptions(void);\r
+\r
+/* Rearrange TOC entries */\r
+extern void MoveToStart(Archive* AH, char *oType);\r
+extern void MoveToEnd(Archive* AH, char *oType); \r
+extern void SortTocByOID(Archive* AH);\r
+extern void SortTocByID(Archive* AH);\r
+extern void SortTocFromFile(Archive* AH, RestoreOptions *ropt);\r
+\r
+/* Convenience functions used only when writing DATA */\r
+extern int archputs(const char *s, Archive* AH);\r
+extern int archputc(const char c, Archive* AH);\r
+extern int archprintf(Archive* AH, const char *fmt, ...);\r
+\r
+#endif\r
+\r
+\r
+\r
--- /dev/null
+/*-------------------------------------------------------------------------
+ *
+ * pg_backup_archiver.c
+ *
+ * Private implementation of the archiver routines.
+ *
+ * See the headers to pg_restore for more details.
+ *
+ * Copyright (c) 2000, Philip Warner
+ * Rights are granted to use this software in any way so long
+ * as this notice is not removed.
+ *
+ * The author is not responsible for loss or damages that may
+ * result from its use.
+ *
+ *
+ * IDENTIFICATION
+ *
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au
+ *
+ * Initial version.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "pg_backup.h"
+#include "pg_backup_archiver.h"
+#include <string.h>
+#include <unistd.h> /* for dup */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+
+static void _SortToc(ArchiveHandle* AH, TocSortCompareFn fn);
+static int _tocSortCompareByOIDNum(const void *p1, const void *p2);
+static int _tocSortCompareByIDNum(const void *p1, const void *p2);
+static ArchiveHandle* _allocAH(const char* FileSpec, ArchiveFormat fmt,
+ int compression, ArchiveMode mode);
+static int _printTocEntry(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);
+static int _tocEntryRequired(TocEntry* te, RestoreOptions *ropt);
+static void _disableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
+static void _enableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
+static TocEntry* _getTocEntry(ArchiveHandle* AH, int id);
+static void _moveAfter(ArchiveHandle* AH, TocEntry* pos, TocEntry* te);
+static void _moveBefore(ArchiveHandle* AH, TocEntry* pos, TocEntry* te);
+static int _discoverArchiveFormat(ArchiveHandle* AH);
+static char *progname = "Archiver";
+
+/*
+ * Wrapper functions.
+ *
+ * The objective is to make writing new formats and dumpers as simple
+ * as possible, if necessary at the expense of extra function calls etc.
+ *
+ */
+
+
+/* Create a new archive */
+/* Public */
+Archive* CreateArchive(const char* FileSpec, ArchiveFormat fmt, int compression)
+{
+ ArchiveHandle* AH = _allocAH(FileSpec, fmt, compression, archModeWrite);
+ return (Archive*)AH;
+}
+
+/* Open an existing archive */
+/* Public */
+Archive* OpenArchive(const char* FileSpec, ArchiveFormat fmt)
+{
+ ArchiveHandle* AH = _allocAH(FileSpec, fmt, 0, archModeRead);
+ return (Archive*)AH;
+}
+
+/* Public */
+void CloseArchive(Archive* AHX)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ (*AH->ClosePtr)(AH);
+
+ /* Close the output */
+ if (AH->gzOut)
+ GZCLOSE(AH->OF);
+ else if (AH->OF != stdout)
+ fclose(AH->OF);
+}
+
+/* Public */
+void RestoreArchive(Archive* AHX, RestoreOptions *ropt)
+{
+ ArchiveHandle* AH = (ArchiveHandle*) AHX;
+ TocEntry *te = AH->toc->next;
+ int reqs;
+ OutputContext sav;
+
+ if (ropt->filename || ropt->compression)
+ sav = SetOutput(AH, ropt->filename, ropt->compression);
+
+ ahprintf(AH, "--\n-- Selected TOC Entries:\n--\n");
+
+ /* Drop the items at the start, in reverse order */
+ if (ropt->dropSchema) {
+ te = AH->toc->prev;
+ while (te != AH->toc) {
+ reqs = _tocEntryRequired(te, ropt);
+ if ( (reqs & 1) && te->dropStmt) { /* We want the schema */
+ ahprintf(AH, "%s", te->dropStmt);
+ }
+ te = te->prev;
+ }
+ }
+
+ te = AH->toc->next;
+ while (te != AH->toc) {
+ reqs = _tocEntryRequired(te, ropt);
+
+ if (reqs & 1) /* We want the schema */
+ _printTocEntry(AH, te, ropt);
+
+ if (AH->PrintTocDataPtr != NULL && (reqs & 2) != 0) {
+#ifndef HAVE_ZLIB
+ if (AH->compression != 0)
+ die_horribly("%s: Unable to restore data from a compressed archive\n", progname);
+#endif
+ _disableTriggers(AH, te, ropt);
+ (*AH->PrintTocDataPtr)(AH, te, ropt);
+ _enableTriggers(AH, te, ropt);
+ }
+ te = te->next;
+ }
+
+ if (ropt->filename)
+ ResetOutput(AH, sav);
+
+}
+
+RestoreOptions* NewRestoreOptions(void)
+{
+ RestoreOptions* opts;
+
+ opts = (RestoreOptions*)calloc(1, sizeof(RestoreOptions));
+
+ opts->format = archUnknown;
+
+ return opts;
+}
+
+static void _disableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
+{
+ ahprintf(AH, "-- Disable triggers\n");
+ ahprintf(AH, "UPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" !~ '^pg_';\n\n");
+}
+
+static void _enableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
+{
+ ahprintf(AH, "-- Enable triggers\n");
+ ahprintf(AH, "BEGIN TRANSACTION;\n");
+ ahprintf(AH, "CREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\n");
+ ahprintf(AH, "INSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C,"
+ " \"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" !~ '^pg_' "
+ " GROUP BY 1;\n");
+ ahprintf(AH, "UPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" "
+ "FROM \"tr\" TMP WHERE "
+ "\"pg_class\".\"relname\" = TMP.\"tmp_relname\";\n");
+ ahprintf(AH, "DROP TABLE \"tr\";\n");
+ ahprintf(AH, "COMMIT TRANSACTION;\n\n");
+}
+
+
+/*
+ * This is a routine that is available to pg_dump, hence the 'Archive*' parameter.
+ */
+
+/* Public */
+int WriteData(Archive* AHX, const void* data, int dLen)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+
+ return (*AH->WriteDataPtr)(AH, data, dLen);
+}
+
+/*
+ * Create a new TOC entry. The TOC started out as a simple table of
+ * contents, but it has become the repository for all object metadata;
+ * the name has stuck.
+ */
+
+/* Public */
+void ArchiveEntry(Archive* AHX, const char* oid, const char* name,
+ const char* desc, const char* (deps[]), const char* defn,
+ const char* dropStmt, const char* owner,
+ DataDumperPtr dumpFn, void* dumpArg)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ TocEntry* newToc;
+
+ AH->lastID++;
+ AH->tocCount++;
+
+ newToc = (TocEntry*)malloc(sizeof(TocEntry));
+ if (!newToc)
+ die_horribly("Archiver: unable to allocate memory for TOC entry\n");
+
+ newToc->prev = AH->toc->prev;
+ newToc->next = AH->toc;
+ AH->toc->prev->next = newToc;
+ AH->toc->prev = newToc;
+
+ newToc->id = AH->lastID;
+ newToc->oid = strdup(oid);
+ newToc->oidVal = atoi(oid);
+ newToc->name = strdup(name);
+ newToc->desc = strdup(desc);
+ newToc->defn = strdup(defn);
+ newToc->dropStmt = strdup(dropStmt);
+ newToc->owner = strdup(owner);
+ newToc->printed = 0;
+ newToc->formatData = NULL;
+ newToc->dataDumper = dumpFn;
+ newToc->dataDumperArg = dumpArg;
+
+ newToc->hadDumper = dumpFn ? 1 : 0;
+
+ if (AH->ArchiveEntryPtr != NULL) {
+ (*AH->ArchiveEntryPtr)(AH, newToc);
+ }
+
+ /* printf("New toc owned by '%s', oid %d\n", newToc->owner, newToc->oidVal); */
+}
+
+/* Public */
+void PrintTOCSummary(Archive* AHX, RestoreOptions *ropt)
+{
+ ArchiveHandle* AH = (ArchiveHandle*) AHX;
+ TocEntry *te = AH->toc->next;
+ OutputContext sav;
+
+ if (ropt->filename)
+ sav = SetOutput(AH, ropt->filename, ropt->compression);
+
+ ahprintf(AH, ";\n; Selected TOC Entries:\n;\n");
+
+ while (te != AH->toc) {
+ if (_tocEntryRequired(te, ropt) != 0)
+ ahprintf(AH, "%d; %d %s %s %s\n", te->id, te->oidVal, te->desc, te->name, te->owner);
+ te = te->next;
+ }
+
+ if (ropt->filename)
+ ResetOutput(AH, sav);
+}
+
+/***********
+ * Sorting and Reordering
+ ***********/
+
+/*
+ * Move TOC entries of the specified type to the START of the TOC.
+ */
+/* Public */
+void MoveToStart(Archive* AHX, char *oType)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ TocEntry *te = AH->toc->next;
+ TocEntry *newTe;
+
+ while (te != AH->toc) {
+ te->_moved = 0;
+ te = te->next;
+ }
+
+ te = AH->toc->prev;
+ while (te != AH->toc && !te->_moved) {
+ newTe = te->prev;
+ if (strcmp(te->desc, oType) == 0) {
+ _moveAfter(AH, AH->toc, te);
+ }
+ te = newTe;
+ }
+}
+
+
+/*
+ * Move TOC entries of the specified type to the end of the TOC.
+ */
+/* Public */
+void MoveToEnd(Archive* AHX, char *oType)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ TocEntry *te = AH->toc->next;
+ TocEntry *newTe;
+
+ while (te != AH->toc) {
+ te->_moved = 0;
+ te = te->next;
+ }
+
+ te = AH->toc->next;
+ while (te != AH->toc && !te->_moved) {
+ newTe = te->next;
+ if (strcmp(te->desc, oType) == 0) {
+ _moveBefore(AH, AH->toc, te);
+ }
+ te = newTe;
+ }
+}
+
+/*
+ * Sort TOC by OID
+ */
+/* Public */
+void SortTocByOID(Archive* AHX)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ _SortToc(AH, _tocSortCompareByOIDNum);
+}
+
+/*
+ * Sort TOC by ID
+ */
+/* Public */
+void SortTocByID(Archive* AHX)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ _SortToc(AH, _tocSortCompareByIDNum);
+}
+
+void SortTocFromFile(Archive* AHX, RestoreOptions *ropt)
+{
+ ArchiveHandle* AH = (ArchiveHandle*)AHX;
+ FILE *fh;
+ char buf[1024];
+ char *cmnt;
+ char *endptr;
+ int id;
+ TocEntry *te;
+ TocEntry *tePrev;
+ int i;
+
+ /* Allocate space for the 'wanted' array, and init it */
+ ropt->idWanted = (int*)malloc(sizeof(int)*AH->tocCount);
+ for ( i = 0 ; i < AH->tocCount ; i++ )
+ ropt->idWanted[i] = 0;
+
+ ropt->limitToList = 1;
+
+ /* Mark all entries as 'not moved' */
+ te = AH->toc->next;
+ while (te != AH->toc) {
+ te->_moved = 0;
+ te = te->next;
+ }
+
+ /* Set prev entry as head of list */
+ tePrev = AH->toc;
+
+ /* Setup the file */
+ fh = fopen(ropt->tocFile, PG_BINARY_R);
+ if (!fh)
+ die_horribly("%s: could not open TOC file\n", progname);
+
+ while (fgets(buf, 1024, fh) != NULL)
+ {
+ /* Find a comment */
+ cmnt = strchr(buf, ';');
+ if (cmnt == buf)
+ continue;
+
+ /* End string at comment */
+ if (cmnt != NULL)
+ cmnt[0] = '\0';
+
+ /* Skip if all spaces */
+ if (strspn(buf, " \t") == strlen(buf))
+ continue;
+
+ /* Get an ID */
+ id = strtol(buf, &endptr, 10);
+ if (endptr == buf)
+ {
+ fprintf(stderr, "%s: warning - line ignored: %s\n", progname, buf);
+ continue;
+ }
+
+ /* Find TOC entry */
+ te = _getTocEntry(AH, id);
+ if (!te)
+ die_horribly("%s: could not find entry for id %d\n",progname, id);
+
+ ropt->idWanted[id-1] = 1;
+
+ _moveAfter(AH, tePrev, te);
+ tePrev = te;
+ }
+
+ fclose(fh);
+}
+
+/**********************
+ * Convenience functions that look like standard IO functions
+ * for writing data when in dump mode.
+ **********************/
+
+/* Public */
+int archputs(const char *s, Archive* AH) {
+ return WriteData(AH, s, strlen(s));
+}
+
+/* Public */
+int archputc(const char c, Archive* AH) {
+ return WriteData(AH, &c, 1);
+}
+
+/* Public */
+int archprintf(Archive* AH, const char *fmt, ...)
+{
+ char *p = NULL;
+ va_list ap;
+ int bSize = strlen(fmt) + 1024;
+ int cnt = -1;
+
+ /* vsnprintf consumes the va_list, so restart it on every retry */
+ while (cnt < 0) {
+ if (p != NULL) free(p);
+ bSize *= 2;
+ if ((p = malloc(bSize)) == NULL)
+ die_horribly("%s: could not allocate buffer for archprintf\n", progname);
+ va_start(ap, fmt);
+ cnt = vsnprintf(p, bSize, fmt, ap);
+ va_end(ap);
+ }
+ WriteData(AH, p, cnt);
+ free(p);
+ return cnt;
+}
+
+
+/*******************************
+ * Stuff below here should be 'private' to the archiver routines
+ *******************************/
+
+OutputContext SetOutput(ArchiveHandle* AH, char *filename, int compression)
+{
+ OutputContext sav;
+#ifdef HAVE_ZLIB
+ char fmode[10];
+#endif
+ int fn = 0;
+
+ /* Replace the AH output file handle */
+ sav.OF = AH->OF;
+ sav.gzOut = AH->gzOut;
+
+ if (filename) {
+ fn = 0;
+ } else if (AH->FH) {
+ fn = fileno(AH->FH);
+ } else if (AH->fSpec) {
+ fn = 0;
+ filename = AH->fSpec;
+ } else {
+ fn = fileno(stdout);
+ }
+
+ /* If compression explicitly requested, use gzopen */
+#ifdef HAVE_ZLIB
+ if (compression != 0)
+ {
+ sprintf(fmode, "wb%d", compression);
+ if (fn) {
+ AH->OF = gzdopen(dup(fn), fmode); /* Don't use PG_BINARY_x since this is zlib */
+ } else {
+ AH->OF = gzopen(filename, fmode);
+ }
+ AH->gzOut = 1;
+ } else { /* Use fopen */
+#endif
+ if (fn) {
+ AH->OF = fdopen(dup(fn), PG_BINARY_W);
+ } else {
+ AH->OF = fopen(filename, PG_BINARY_W);
+ }
+ AH->gzOut = 0;
+#ifdef HAVE_ZLIB
+ }
+#endif
+
+ return sav;
+}
+
+void ResetOutput(ArchiveHandle* AH, OutputContext sav)
+{
+ if (AH->gzOut)
+ GZCLOSE(AH->OF);
+ else
+ fclose(AH->OF);
+
+ AH->gzOut = sav.gzOut;
+ AH->OF = sav.OF;
+}
+
+
+
+/*
+ * Print formatted text to the output file (usually stdout).
+ */
+int ahprintf(ArchiveHandle* AH, const char *fmt, ...)
+{
+ char *p = NULL;
+ va_list ap;
+ int bSize = strlen(fmt) + 1024; /* Should be enough */
+ int cnt = -1;
+
+ /* vsnprintf consumes the va_list, so restart it on every retry */
+ while (cnt < 0) {
+ if (p != NULL) free(p);
+ bSize *= 2;
+ p = (char*)malloc(bSize);
+ if (p == NULL)
+ die_horribly("%s: could not allocate buffer for ahprintf\n", progname);
+ va_start(ap, fmt);
+ cnt = vsnprintf(p, bSize, fmt, ap);
+ va_end(ap);
+ }
+ ahwrite(p, 1, cnt, AH);
+ free(p);
+ return cnt;
+}
+
+/*
+ * Write buffer to the output file (usually stdout).
+ */
+int ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle* AH)
+{
+ if (AH->gzOut)
+ return GZWRITE((void*)ptr, size, nmemb, AH->OF);
+ else
+ return fwrite((void*)ptr, size, nmemb, AH->OF);
+}
+
+
+void die_horribly(const char *fmt, ...)
+{
+ va_list ap;
+
+ va_start(ap, fmt);
+ vfprintf(stderr, fmt, ap);
+ va_end(ap);
+ exit(1);
+}
+
+static void _moveAfter(ArchiveHandle* AH, TocEntry* pos, TocEntry* te)
+{
+ te->prev->next = te->next;
+ te->next->prev = te->prev;
+
+ te->prev = pos;
+ te->next = pos->next;
+
+ pos->next->prev = te;
+ pos->next = te;
+
+ te->_moved = 1;
+}
+
+static void _moveBefore(ArchiveHandle* AH, TocEntry* pos, TocEntry* te)
+{
+ te->prev->next = te->next;
+ te->next->prev = te->prev;
+
+ te->prev = pos->prev;
+ te->next = pos;
+ pos->prev->next = te;
+ pos->prev = te;
+
+ te->_moved = 1;
+}
+
+static TocEntry* _getTocEntry(ArchiveHandle* AH, int id)
+{
+ TocEntry *te;
+
+ te = AH->toc->next;
+ while (te != AH->toc) {
+ if (te->id == id)
+ return te;
+ te = te->next;
+ }
+ return NULL;
+}
+
+int TocIDRequired(ArchiveHandle* AH, int id, RestoreOptions *ropt)
+{
+ TocEntry *te = _getTocEntry(AH, id);
+
+ if (!te)
+ return 0;
+
+ return _tocEntryRequired(te, ropt);
+}
+
+int WriteInt(ArchiveHandle* AH, int i)
+{
+ int b;
+
+ /* This is a bit yucky, but I don't want to make the
+ * binary format very dependent on representation,
+ * and not knowing much about it, I write out a
+ * sign byte. If you change this, don't forget to change the
+ * file version #, and modify readInt to read the new format
+ * AS WELL AS the old formats.
+ */
+
+ /* SIGN byte */
+ if (i < 0) {
+ (*AH->WriteBytePtr)(AH, 1);
+ i = -i;
+ } else {
+ (*AH->WriteBytePtr)(AH, 0);
+ }
+
+ for(b = 0 ; b < AH->intSize ; b++) {
+ (*AH->WriteBytePtr)(AH, i & 0xFF);
+ i = i / 256;
+ }
+
+ return AH->intSize + 1;
+}
+
+int ReadInt(ArchiveHandle* AH)
+{
+ int res = 0;
+ int shft = 1;
+ int bv, b;
+ int sign = 0; /* Default positive */
+
+ if (AH->version > K_VERS_1_0)
+ /* Read a sign byte */
+ sign = (*AH->ReadBytePtr)(AH);
+
+ for( b = 0 ; b < AH->intSize ; b++) {
+ bv = (*AH->ReadBytePtr)(AH);
+ res = res + shft * bv;
+ shft *= 256;
+ }
+
+ if (sign)
+ res = - res;
+
+ return res;
+}
+
+int WriteStr(ArchiveHandle* AH, char* c)
+{
+ int l = WriteInt(AH, strlen(c));
+ return (*AH->WriteBufPtr)(AH, c, strlen(c)) + l;
+}
+
+char* ReadStr(ArchiveHandle* AH)
+{
+ char* buf;
+ int l;
+
+ l = ReadInt(AH);
+ buf = (char*)malloc(l+1);
+ if (!buf)
+ die_horribly("Archiver: Unable to allocate sufficient memory in ReadStr\n");
+
+ (*AH->ReadBufPtr)(AH, (void*)buf, l);
+ buf[l] = '\0';
+ return buf;
+}
+
+int _discoverArchiveFormat(ArchiveHandle* AH)
+{
+ FILE *fh;
+ char sig[6]; /* More than enough */
+ int cnt;
+ int wantClose = 0;
+
+ if (AH->fSpec) {
+ wantClose = 1;
+ fh = fopen(AH->fSpec, PG_BINARY_R);
+ } else {
+ fh = stdin;
+ }
+
+ if (!fh)
+ die_horribly("Archiver: could not open input file\n");
+
+ cnt = fread(sig, 1, 5, fh);
+
+ if (cnt != 5) {
+ fprintf(stderr, "Archiver: input file is too short, or is unreadable\n");
+ exit(1);
+ }
+
+ if (strncmp(sig, "PGDMP", 5) != 0)
+ {
+ fprintf(stderr, "Archiver: input file does not appear to be a valid archive\n");
+ exit(1);
+ }
+
+ AH->vmaj = fgetc(fh);
+ AH->vmin = fgetc(fh);
+
+ /* Check header version; varies from V1.0 */
+ if (AH->vmaj > 1 || ( (AH->vmaj == 1) && (AH->vmin > 0) ) ) /* Version > 1.0 */
+ AH->vrev = fgetc(fh);
+ else
+ AH->vrev = 0;
+
+ AH->intSize = fgetc(fh);
+ AH->format = fgetc(fh);
+
+ /* Make a convenient integer <maj><min><rev>00 */
+ AH->version = ( (AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev ) * 256 + 0;
+
+ /* If we can't seek, then mark the header as read */
+ if (fseek(fh, 0, SEEK_SET) != 0)
+ AH->readHeader = 1;
+
+ /* Close the file */
+ if (wantClose)
+ fclose(fh);
+
+ return AH->format;
+
+}
+
+
+/*
+ * Allocate an archive handle
+ */
+static ArchiveHandle* _allocAH(const char* FileSpec, ArchiveFormat fmt,
+ int compression, ArchiveMode mode) {
+ ArchiveHandle* AH;
+
+ AH = (ArchiveHandle*)malloc(sizeof(ArchiveHandle));
+ if (!AH)
+ die_horribly("Archiver: Could not allocate archive handle\n");
+
+ AH->vmaj = K_VERS_MAJOR;
+ AH->vmin = K_VERS_MINOR;
+
+ AH->intSize = sizeof(int);
+ AH->lastID = 0;
+ if (FileSpec) {
+ AH->fSpec = strdup(FileSpec);
+ } else {
+ AH->fSpec = NULL;
+ }
+ AH->FH = NULL;
+ AH->formatData = NULL;
+
+ AH->currToc = NULL;
+ AH->currUser = "";
+
+ AH->toc = (TocEntry*)malloc(sizeof(TocEntry));
+ if (!AH->toc)
+ die_horribly("Archiver: Could not allocate TOC header\n");
+
+ AH->tocCount = 0;
+ AH->toc->next = AH->toc;
+ AH->toc->prev = AH->toc;
+ AH->toc->id = 0;
+ AH->toc->oid = NULL;
+ AH->toc->name = NULL; /* eg. MY_SPECIAL_FUNCTION */
+ AH->toc->desc = NULL; /* eg. FUNCTION */
+ AH->toc->defn = NULL; /* ie. sql to define it */
+ AH->toc->depOid = NULL;
+
+ AH->mode = mode;
+ AH->format = fmt;
+ AH->compression = compression;
+
+ AH->ArchiveEntryPtr = NULL;
+
+ AH->StartDataPtr = NULL;
+ AH->WriteDataPtr = NULL;
+ AH->EndDataPtr = NULL;
+
+ AH->WriteBytePtr = NULL;
+ AH->ReadBytePtr = NULL;
+ AH->WriteBufPtr = NULL;
+ AH->ReadBufPtr = NULL;
+ AH->ClosePtr = NULL;
+ AH->WriteExtraTocPtr = NULL;
+ AH->ReadExtraTocPtr = NULL;
+ AH->PrintExtraTocPtr = NULL;
+
+ AH->readHeader = 0;
+
+ /* Open stdout with no compression for AH output handle */
+ AH->gzOut = 0;
+ AH->OF = stdout;
+
+ if (fmt == archUnknown)
+ fmt = _discoverArchiveFormat(AH);
+
+ switch (fmt) {
+
+ case archCustom:
+ InitArchiveFmt_Custom(AH);
+ break;
+
+ case archFiles:
+ InitArchiveFmt_Files(AH);
+ break;
+
+ case archPlainText:
+ InitArchiveFmt_PlainText(AH);
+ break;
+
+ default:
+ die_horribly("Archiver: Unrecognized file format '%d'\n", fmt);
+ }
+
+ return AH;
+}
+
+
+void WriteDataChunks(ArchiveHandle* AH)
+{
+ TocEntry *te = AH->toc->next;
+
+ while (te != AH->toc) {
+ if (te->dataDumper != NULL) {
+ AH->currToc = te;
+ /* printf("Writing data for %d (%x)\n", te->id, te); */
+ if (AH->StartDataPtr != NULL) {
+ (*AH->StartDataPtr)(AH, te);
+ }
+
+ /* printf("Dumper arg for %d is %x\n", te->id, te->dataDumperArg); */
+ /*
+ * The user-provided DataDumper routine needs to call AH->WriteData
+ */
+ (*te->dataDumper)((Archive*)AH, te->oid, te->dataDumperArg);
+
+ if (AH->EndDataPtr != NULL) {
+ (*AH->EndDataPtr)(AH, te);
+ }
+ AH->currToc = NULL;
+ }
+ te = te->next;
+ }
+}
+
+void WriteToc(ArchiveHandle* AH)
+{
+ TocEntry *te = AH->toc->next;
+
+ /* printf("%d TOC Entries to save\n", AH->tocCount); */
+
+ WriteInt(AH, AH->tocCount);
+ while (te != AH->toc) {
+ WriteInt(AH, te->id);
+ WriteInt(AH, te->dataDumper ? 1 : 0);
+ WriteStr(AH, te->oid);
+ WriteStr(AH, te->name);
+ WriteStr(AH, te->desc);
+ WriteStr(AH, te->defn);
+ WriteStr(AH, te->dropStmt);
+ WriteStr(AH, te->owner);
+ if (AH->WriteExtraTocPtr) {
+ (*AH->WriteExtraTocPtr)(AH, te);
+ }
+ te = te->next;
+ }
+}
+
+void ReadToc(ArchiveHandle* AH)
+{
+ int i;
+
+ TocEntry *te = AH->toc->next;
+
+ AH->tocCount = ReadInt(AH);
+
+ for( i = 0 ; i < AH->tocCount ; i++) {
+
+ te = (TocEntry*)malloc(sizeof(TocEntry));
+ te->id = ReadInt(AH);
+
+ /* Sanity check */
+ if (te->id <= 0 || te->id > AH->tocCount)
+ die_horribly("Archiver: failed sanity check (bad entry id) - perhaps a corrupt TOC\n");
+
+ te->hadDumper = ReadInt(AH);
+ te->oid = ReadStr(AH);
+ te->oidVal = atoi(te->oid);
+ te->name = ReadStr(AH);
+ te->desc = ReadStr(AH);
+ te->defn = ReadStr(AH);
+ te->dropStmt = ReadStr(AH);
+ te->owner = ReadStr(AH);
+ if (AH->ReadExtraTocPtr) {
+ (*AH->ReadExtraTocPtr)(AH, te);
+ }
+ te->prev = AH->toc->prev;
+ AH->toc->prev->next = te;
+ AH->toc->prev = te;
+ te->next = AH->toc;
+ }
+}
+
+static int _tocEntryRequired(TocEntry* te, RestoreOptions *ropt)
+{
+ int res = 3; /* Data and Schema */
+
+ /* If it's an ACL, maybe ignore it */
+ if (ropt->aclsSkip && strcmp(te->desc,"ACL") == 0)
+ return 0;
+
+ /* Check if tablename only is wanted */
+ if (ropt->selTypes)
+ {
+ if ( (strcmp(te->desc, "TABLE") == 0) || (strcmp(te->desc, "TABLE DATA") == 0) )
+ {
+ if (!ropt->selTable)
+ return 0;
+ if (ropt->tableNames && strcmp(ropt->tableNames, te->name) != 0)
+ return 0;
+ } else if (strcmp(te->desc, "INDEX") == 0) {
+ if (!ropt->selIndex)
+ return 0;
+ if (ropt->indexNames && strcmp(ropt->indexNames, te->name) != 0)
+ return 0;
+ } else if (strcmp(te->desc, "FUNCTION") == 0) {
+ if (!ropt->selFunction)
+ return 0;
+ if (ropt->functionNames && strcmp(ropt->functionNames, te->name) != 0)
+ return 0;
+ } else if (strcmp(te->desc, "TRIGGER") == 0) {
+ if (!ropt->selTrigger)
+ return 0;
+ if (ropt->triggerNames && strcmp(ropt->triggerNames, te->name) != 0)
+ return 0;
+ } else {
+ return 0;
+ }
+ }
+
+ /* Mask it if we only want schema */
+ if (ropt->schemaOnly)
+ res = res & 1;
+
+ /* Mask it if we only want data */
+ if (ropt->dataOnly)
+ res = res & 2;
+
+ /* Mask it if we don't have a schema contribution */
+ if (!te->defn || strlen(te->defn) == 0)
+ res = res & 2;
+
+ /* Mask it if we don't have a possible data contribution */
+ if (!te->hadDumper)
+ res = res & 1;
+
+ /* Finally, if we used a list, limit based on that as well */
+ if (ropt->limitToList && !ropt->idWanted[te->id - 1])
+ return 0;
+
+ return res;
+}
+
+static int _printTocEntry(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)
+{
+ ahprintf(AH, "--\n-- TOC Entry ID %d (OID %s)\n--\n-- Name: %s Type: %s Owner: %s\n",
+ te->id, te->oid, te->name, te->desc, te->owner);
+ if (AH->PrintExtraTocPtr != NULL) {
+ (*AH->PrintExtraTocPtr)(AH, te);
+ }
+ ahprintf(AH, "--\n\n");
+
+ if (te->owner && strlen(te->owner) != 0 && strcmp(AH->currUser, te->owner) != 0) {
+ ahprintf(AH, "\\connect - %s\n", te->owner);
+ AH->currUser = te->owner;
+ }
+
+ ahprintf(AH, "%s\n", te->defn);
+
+ return 1;
+}
+
+void WriteHead(ArchiveHandle* AH)
+{
+ (*AH->WriteBufPtr)(AH, "PGDMP", 5); /* Magic code */
+ (*AH->WriteBytePtr)(AH, AH->vmaj);
+ (*AH->WriteBytePtr)(AH, AH->vmin);
+ (*AH->WriteBytePtr)(AH, AH->vrev);
+ (*AH->WriteBytePtr)(AH, AH->intSize);
+ (*AH->WriteBytePtr)(AH, AH->format);
+
+#ifndef HAVE_ZLIB
+ if (AH->compression != 0)
+ fprintf(stderr, "%s: WARNING - requested compression not available in this installation - "
+ "archive will be uncompressed \n", progname);
+
+ AH->compression = 0;
+ (*AH->WriteBytePtr)(AH, 0);
+
+#else
+
+ (*AH->WriteBytePtr)(AH, AH->compression);
+
+#endif
+}
+
+void ReadHead(ArchiveHandle* AH)
+{
+ char tmpMag[7];
+ int fmt;
+
+ if (AH->readHeader)
+ return;
+
+ (*AH->ReadBufPtr)(AH, tmpMag, 5);
+
+ if (strncmp(tmpMag,"PGDMP", 5) != 0)
+ die_horribly("Archiver: Did not find magic PGDMP in file header\n");
+
+ AH->vmaj = (*AH->ReadBytePtr)(AH);
+ AH->vmin = (*AH->ReadBytePtr)(AH);
+
+ if (AH->vmaj > 1 || ( (AH->vmaj == 1) && (AH->vmin > 0) ) ) /* Version > 1.0 */
+ {
+ AH->vrev = (*AH->ReadBytePtr)(AH);
+ } else {
+ AH->vrev = 0;
+ }
+
+ AH->version = ( (AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev ) * 256 + 0;
+
+
+ if (AH->version < K_VERS_1_0 || AH->version > K_VERS_MAX)
+ die_horribly("Archiver: unsupported version (%d.%d) in file header\n", AH->vmaj, AH->vmin);
+
+ AH->intSize = (*AH->ReadBytePtr)(AH);
+ if (AH->intSize > 32)
+ die_horribly("Archiver: sanity check on integer size (%d) fails\n", AH->intSize);
+
+ if (AH->intSize > sizeof(int))
+ fprintf(stderr, "\nWARNING: Backup file was made on a machine with larger integers, "
+ "some operations may fail\n");
+
+ fmt = (*AH->ReadBytePtr)(AH);
+
+ if (AH->format != fmt)
+ die_horribly("Archiver: expected format (%d) differs from format found in file (%d)\n",
+ AH->format, fmt);
+
+ if (AH->version >= K_VERS_1_2)
+ {
+ AH->compression = (*AH->ReadBytePtr)(AH);
+ } else {
+ AH->compression = Z_DEFAULT_COMPRESSION;
+ }
+
+#ifndef HAVE_ZLIB
+ fprintf(stderr, "%s: WARNING - archive is compressed - data will not be available\n", progname);
+#endif
+
+}
+
+
+static void _SortToc(ArchiveHandle* AH, TocSortCompareFn fn)
+{
+ TocEntry** tea;
+ TocEntry* te;
+ int i;
+
+ /* Allocate an array for quicksort (TOC size + head & foot) */
+ tea = (TocEntry**)malloc(sizeof(TocEntry*) * (AH->tocCount + 2) );
+
+ /* Build array of toc entries, including header at start and end */
+ te = AH->toc;
+ for( i = 0 ; i <= AH->tocCount+1 ; i++) {
+ /* printf("%d: %x (%x, %x) - %d\n", i, te, te->prev, te->next, te->oidVal); */
+ tea[i] = te;
+ te = te->next;
+ }
+
+ /* Sort it, but ignore the header entries */
+ qsort(&(tea[1]), AH->tocCount, sizeof(TocEntry*), fn);
+
+ /* Rebuild list: this works because we have headers at each end */
+ for( i = 1 ; i <= AH->tocCount ; i++) {
+ tea[i]->next = tea[i+1];
+ tea[i]->prev = tea[i-1];
+ }
+
+
+ AH->toc->next = tea[1];
+ AH->toc->prev = tea[AH->tocCount];
+}
+
+static int _tocSortCompareByOIDNum(const void* p1, const void* p2)
+{
+ TocEntry* te1 = *(TocEntry**)p1;
+ TocEntry* te2 = *(TocEntry**)p2;
+ int id1 = te1->oidVal;
+ int id2 = te2->oidVal;
+
+ /* printf("Comparing %d to %d\n", id1, id2); */
+
+ if (id1 < id2) {
+ return -1;
+ } else if (id1 > id2) {
+ return 1;
+ } else {
+ return _tocSortCompareByIDNum(p1, p2);
+ }
+}
+
+static int _tocSortCompareByIDNum(const void* p1, const void* p2)
+{
+ TocEntry* te1 = *(TocEntry**)p1;
+ TocEntry* te2 = *(TocEntry**)p2;
+ int id1 = te1->id;
+ int id2 = te2->id;
+
+ /* printf("Comparing %d to %d\n", id1, id2); */
+
+ if (id1 < id2) {
+ return -1;
+ } else if (id1 > id2) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+
+
--- /dev/null
+/*-------------------------------------------------------------------------\r
+ *\r
+ * pg_backup_archiver.h\r
+ *\r
+ * Private interface to the pg_dump archiver routines.\r
+ * It is NOT intended that these routines be called by any \r
+ * dumper directly.\r
+ *\r
+ * See the headers to pg_restore for more details.\r
+ *\r
+ * Copyright (c) 2000, Philip Warner\r
+ * Rights are granted to use this software in any way so long\r
+ * as this notice is not removed.\r
+ *\r
+ * The author is not responsible for loss or damages that may\r
+ * result from its use.\r
+ *\r
+ *\r
+ * IDENTIFICATION\r
+ *\r
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au\r
+ *\r
+ * Initial version. \r
+ *\r
+ *-------------------------------------------------------------------------\r
+ */\r
+\r
+#ifndef __PG_BACKUP_ARCHIVE__\r
+#define __PG_BACKUP_ARCHIVE__\r
+\r
+#include <stdio.h>\r
+\r
+#ifdef HAVE_ZLIB\r
+#include <zlib.h>\r
+#define GZCLOSE(fh) gzclose(fh)\r
+#define GZWRITE(p, s, n, fh) gzwrite(fh, p, n * s)\r
+#define GZREAD(p, s, n, fh) gzread(fh, p, n * s)\r
+#else\r
+#define GZCLOSE(fh) fclose(fh)\r
+#define GZWRITE(p, s, n, fh) fwrite(p, s, n, fh)\r
+#define GZREAD(p, s, n, fh) fread(p, s, n, fh)\r
+#define Z_DEFAULT_COMPRESSION -1\r
+\r
+typedef struct _z_stream {\r
+ void *next_in;\r
+ void *next_out;\r
+ int avail_in;\r
+ int avail_out;\r
+} z_stream;\r
+typedef z_stream *z_streamp;\r
+#endif\r
+\r
+#include "pg_backup.h"\r
+\r
+#define K_VERS_MAJOR 1\r
+#define K_VERS_MINOR 2 \r
+#define K_VERS_REV 0 \r
+\r
+/* Some important version numbers (checked in code) */\r
+#define K_VERS_1_0 (( (1 * 256 + 0) * 256 + 0) * 256 + 0)\r
+#define K_VERS_1_2 (( (1 * 256 + 2) * 256 + 0) * 256 + 0)\r
+#define K_VERS_MAX (( (1 * 256 + 2) * 256 + 255) * 256 + 0)\r
+\r
+struct _archiveHandle;\r
+struct _tocEntry;\r
+struct _restoreList;\r
+\r
+typedef void (*ClosePtr) (struct _archiveHandle* AH);\r
+typedef void (*ArchiveEntryPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+ \r
+typedef void (*StartDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+typedef int (*WriteDataPtr) (struct _archiveHandle* AH, const void* data, int dLen);\r
+typedef void (*EndDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+\r
+typedef int (*WriteBytePtr) (struct _archiveHandle* AH, const int i);\r
+typedef int (*ReadBytePtr) (struct _archiveHandle* AH);\r
+typedef int (*WriteBufPtr) (struct _archiveHandle* AH, const void* c, int len);\r
+typedef int (*ReadBufPtr) (struct _archiveHandle* AH, void* buf, int len);\r
+typedef void (*SaveArchivePtr) (struct _archiveHandle* AH);\r
+typedef void (*WriteExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+typedef void (*ReadExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+typedef void (*PrintExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);\r
+typedef void (*PrintTocDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te, \r
+ RestoreOptions *ropt);\r
+\r
+typedef int (*TocSortCompareFn) (const void* te1, const void *te2); \r
+\r
+typedef enum _archiveMode {\r
+ archModeWrite,\r
+ archModeRead\r
+} ArchiveMode;\r
+\r
+typedef struct _outputContext {\r
+ void *OF;\r
+ int gzOut;\r
+} OutputContext;\r
+\r
+typedef struct _archiveHandle {\r
+ char vmaj; /* Version of file */\r
+ char vmin;\r
+ char vrev;\r
+ int version; /* Conveniently formatted version */\r
+\r
+ int intSize; /* Size of an integer in the archive */\r
+ ArchiveFormat format; /* Archive format */\r
+\r
+ int readHeader; /* Used if file header has been read already */\r
+\r
+ ArchiveEntryPtr ArchiveEntryPtr; /* Called for each metadata object */\r
+ StartDataPtr StartDataPtr; /* Called when table data is about to be dumped */\r
+ WriteDataPtr WriteDataPtr; /* Called to send some table data to the archive */\r
+ EndDataPtr EndDataPtr; /* Called when table data dump is finished */\r
+ WriteBytePtr WriteBytePtr; /* Write a byte to output */\r
+ ReadBytePtr ReadBytePtr; /* Read a byte from the archive */\r
+ WriteBufPtr WriteBufPtr; \r
+ ReadBufPtr ReadBufPtr;\r
+ ClosePtr ClosePtr; /* Close the archive */\r
+ WriteExtraTocPtr WriteExtraTocPtr; /* Write extra TOC entry data associated with */\r
+ /* the current archive format */\r
+ ReadExtraTocPtr ReadExtraTocPtr; /* Read extra info associated with archive format */\r
+ PrintExtraTocPtr PrintExtraTocPtr; /* Extra TOC info for format */\r
+ PrintTocDataPtr PrintTocDataPtr;\r
+\r
+ int lastID; /* Last internal ID for a TOC entry */\r
+ char* fSpec; /* Archive File Spec */\r
+ FILE *FH; /* General purpose file handle */\r
+ void *OF; /* Output file handle */\r
+ int gzOut; /* Output file is written via zlib */\r
+\r
+ struct _tocEntry* toc; /* List of TOC entries */\r
+ int tocCount; /* Number of TOC entries */\r
+ struct _tocEntry* currToc; /* Used when dumping data */\r
+ char *currUser; /* Restore: current username in script */\r
+ int compression; /* Compression requested on open */\r
+ ArchiveMode mode; /* File mode - r or w */\r
+ void* formatData; /* Header data specific to file format */\r
+\r
+} ArchiveHandle;\r
+\r
+typedef struct _tocEntry {\r
+ struct _tocEntry* prev;\r
+ struct _tocEntry* next;\r
+ int id;\r
+ int hadDumper; /* Archiver was passed a dumper routine (used in restore) */\r
+ char* oid;\r
+ int oidVal;\r
+ char* name;\r
+ char* desc;\r
+ char* defn;\r
+ char* dropStmt;\r
+ char* owner;\r
+ char** depOid;\r
+ int printed; /* Indicates if entry defn has been dumped */\r
+ DataDumperPtr dataDumper; /* Routine to dump data for object */\r
+ void* dataDumperArg; /* Arg for above routine */\r
+ void* formatData; /* TOC Entry data specific to file format */\r
+\r
+ int _moved; /* Marker used when rearranging TOC */\r
+\r
+} TocEntry;\r
+\r
+extern void die_horribly(const char *fmt, ...);\r
+\r
+extern void WriteTOC(ArchiveHandle* AH);\r
+extern void ReadTOC(ArchiveHandle* AH);\r
+extern void WriteHead(ArchiveHandle* AH);\r
+extern void ReadHead(ArchiveHandle* AH);\r
+extern void WriteToc(ArchiveHandle* AH);\r
+extern void ReadToc(ArchiveHandle* AH);\r
+extern void WriteDataChunks(ArchiveHandle* AH);\r
+\r
+extern int TocIDRequired(ArchiveHandle* AH, int id, RestoreOptions *ropt);\r
+\r
+/*\r
+ * Mandatory routines for each supported format\r
+ */\r
+\r
+extern int WriteInt(ArchiveHandle* AH, int i);\r
+extern int ReadInt(ArchiveHandle* AH);\r
+extern char* ReadStr(ArchiveHandle* AH);\r
+extern int WriteStr(ArchiveHandle* AH, char* s);\r
+\r
+extern void InitArchiveFmt_Custom(ArchiveHandle* AH);\r
+extern void InitArchiveFmt_Files(ArchiveHandle* AH);\r
+extern void InitArchiveFmt_PlainText(ArchiveHandle* AH);\r
+\r
+extern OutputContext SetOutput(ArchiveHandle* AH, char *filename, int compression);\r
+extern void ResetOutput(ArchiveHandle* AH, OutputContext savedContext);\r
+\r
+int ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle* AH);\r
+int ahprintf(ArchiveHandle* AH, const char *fmt, ...);\r
+\r
+#endif\r
--- /dev/null
+/*-------------------------------------------------------------------------\r
+ *\r
+ * pg_backup_custom.c\r
+ *\r
+ * Implements the custom output format.\r
+ *\r
+ * See the headers to pg_restore for more details.\r
+ *\r
+ * Copyright (c) 2000, Philip Warner\r
+ * Rights are granted to use this software in any way so long\r
+ * as this notice is not removed.\r
+ *\r
+ * The author is not responsible for loss or damages that may\r
+ * result from its use.\r
+ *\r
+ *\r
+ * IDENTIFICATION\r
+ *\r
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au\r
+ *\r
+ * Initial version. \r
+ *\r
+ *-------------------------------------------------------------------------\r
+ */\r
+\r
+#include <stdlib.h>\r
+#include "pg_backup.h"\r
+#include "pg_backup_archiver.h"\r
+\r
+#include <errno.h>\r
+\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteByte(ArchiveHandle* AH, const int i);\r
+static int _ReadByte(ArchiveHandle* );\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);\r
+static int _ReadBuf(ArchiveHandle* AH, void* buf, int len);\r
+static void _CloseArchive(ArchiveHandle* AH);\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);\r
+static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+\r
+static void _PrintData(ArchiveHandle* AH);\r
+static void _skipData(ArchiveHandle* AH);\r
+\r
+#define zlibOutSize 4096\r
+#define zlibInSize 4096\r
+\r
+typedef struct {\r
+ z_streamp zp;\r
+ char* zlibOut;\r
+ char* zlibIn;\r
+ int inSize;\r
+ int hasSeek;\r
+ int filePos;\r
+ int dataStart;\r
+} lclContext;\r
+\r
+typedef struct {\r
+ int dataPos;\r
+ int dataLen;\r
+} lclTocEntry;\r
+\r
+static int _getFilePos(ArchiveHandle* AH, lclContext* ctx);\r
+\r
+static char* progname = "Archiver(custom)";\r
+\r
+/*\r
+ * Handler functions. \r
+ */\r
+void InitArchiveFmt_Custom(ArchiveHandle* AH) \r
+{\r
+ lclContext* ctx;\r
+\r
+ /* Assuming static functions, this can be copied for each format. */\r
+ AH->ArchiveEntryPtr = _ArchiveEntry;\r
+ AH->StartDataPtr = _StartData;\r
+ AH->WriteDataPtr = _WriteData;\r
+ AH->EndDataPtr = _EndData;\r
+ AH->WriteBytePtr = _WriteByte;\r
+ AH->ReadBytePtr = _ReadByte;\r
+ AH->WriteBufPtr = _WriteBuf;\r
+ AH->ReadBufPtr = _ReadBuf;\r
+ AH->ClosePtr = _CloseArchive;\r
+ AH->PrintTocDataPtr = _PrintTocData;\r
+ AH->ReadExtraTocPtr = _ReadExtraToc;\r
+ AH->WriteExtraTocPtr = _WriteExtraToc;\r
+ AH->PrintExtraTocPtr = _PrintExtraToc;\r
+\r
+ /*\r
+ * Set up some special context used in compressing data.\r
+ */\r
+ ctx = (lclContext*)malloc(sizeof(lclContext));\r
+ if (ctx == NULL)\r
+ die_horribly("%s: Unable to allocate archive context",progname);\r
+ AH->formatData = (void*)ctx;\r
+\r
+ ctx->zp = (z_streamp)malloc(sizeof(z_stream));\r
+ if (ctx->zp == NULL)\r
+ die_horribly("%s: unable to allocate zlib stream archive context",progname);\r
+\r
+ ctx->zlibOut = (char*)malloc(zlibOutSize);\r
+ ctx->zlibIn = (char*)malloc(zlibInSize);\r
+ ctx->inSize = zlibInSize;\r
+ ctx->filePos = 0;\r
+\r
+ if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)\r
+ die_horribly("%s: unable to allocate buffers in archive context",progname);\r
+\r
+ /*\r
+ * Now open the file\r
+ */\r
+ if (AH->mode == archModeWrite) {\r
+ if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {\r
+ AH->FH = fopen(AH->fSpec, PG_BINARY_W);\r
+ } else {\r
+ AH->FH = stdout;\r
+ }\r
+\r
+ if (!AH->FH)\r
+ die_horribly("%s: unable to open archive file %s",progname, AH->fSpec);\r
+\r
+ ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);\r
+\r
+ } else {\r
+ if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {\r
+ AH->FH = fopen(AH->fSpec, PG_BINARY_R);\r
+ } else {\r
+ AH->FH = stdin;\r
+ }\r
+ if (!AH->FH)\r
+ die_horribly("%s: unable to open archive file %s",progname, AH->fSpec);\r
+\r
+ ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);\r
+\r
+ ReadHead(AH);\r
+ ReadToc(AH);\r
+ ctx->dataStart = _getFilePos(AH, ctx);\r
+ }\r
+\r
+}\r
+\r
+/*\r
+ * - Start a new TOC entry\r
+*/\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te) \r
+{\r
+ lclTocEntry* ctx;\r
+\r
+ ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));\r
+ if (te->dataDumper) {\r
+ ctx->dataPos = -1;\r
+ } else {\r
+ ctx->dataPos = 0;\r
+ }\r
+ ctx->dataLen = 0;\r
+ te->formatData = (void*)ctx;\r
+\r
+}\r
+\r
+static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ WriteInt(AH, ctx->dataPos);\r
+ WriteInt(AH, ctx->dataLen);\r
+}\r
+\r
+static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ if (ctx == NULL) {\r
+ ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));\r
+ te->formatData = (void*)ctx;\r
+ }\r
+\r
+ ctx->dataPos = ReadInt( AH );\r
+ ctx->dataLen = ReadInt( AH );\r
+ \r
+}\r
+\r
+static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ ahprintf(AH, "-- Data Pos: %d (Length %d)\n", ctx->dataPos, ctx->dataLen);\r
+}\r
+\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ z_streamp zp = ctx->zp;\r
+ lclTocEntry* tctx = (lclTocEntry*)te->formatData;\r
+\r
+ tctx->dataPos = _getFilePos(AH, ctx);\r
+\r
+ WriteInt(AH, te->id); /* For sanity check */\r
+\r
+#ifdef HAVE_ZLIB\r
+\r
+ if (AH->compression < 0 || AH->compression > 9) {\r
+ AH->compression = Z_DEFAULT_COMPRESSION;\r
+ }\r
+\r
+ if (AH->compression != 0) {\r
+ zp->zalloc = Z_NULL;\r
+ zp->zfree = Z_NULL;\r
+ zp->opaque = Z_NULL;\r
+\r
+ if (deflateInit(zp, AH->compression) != Z_OK)\r
+ die_horribly("%s: could not initialize compression library - %s\n",progname, zp->msg);\r
+ }\r
+\r
+#else\r
+\r
+ AH->compression = 0;\r
+\r
+#endif\r
+\r
+ /* Just be paranoid - maybe End is called after Start, with no Write */\r
+ zp->next_out = ctx->zlibOut;\r
+ zp->avail_out = zlibOutSize;\r
+}\r
+\r
+static int _DoDeflate(ArchiveHandle* AH, lclContext* ctx, int flush) \r
+{\r
+ z_streamp zp = ctx->zp;\r
+\r
+#ifdef HAVE_ZLIB\r
+ char* out = ctx->zlibOut;\r
+ int res = Z_OK;\r
+\r
+ if (AH->compression != 0) \r
+ {\r
+ res = deflate(zp, flush);\r
+ if (res == Z_STREAM_ERROR)\r
+ die_horribly("%s: could not compress data - %s\n",progname, zp->msg);\r
+\r
+ if ( ( (flush == Z_FINISH) && (zp->avail_out < zlibOutSize) )\r
+ || (zp->avail_out == 0) \r
+ || (zp->avail_in != 0)\r
+ ) \r
+ {\r
+ /*\r
+ * Extra paranoia: avoid zero-length chunks since a zero \r
+ * length chunk is the EOF marker. This should never happen\r
+ * but...\r
+ */\r
+ if (zp->avail_out < zlibOutSize) {\r
+ /* printf("Wrote %d byte deflated chunk\n", zlibOutSize - zp->avail_out); */\r
+ WriteInt(AH, zlibOutSize - zp->avail_out);\r
+ fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH);\r
+ ctx->filePos += zlibOutSize - zp->avail_out;\r
+ }\r
+ zp->next_out = out;\r
+ zp->avail_out = zlibOutSize;\r
+ }\r
+ } else {\r
+#endif\r
+ if (zp->avail_in > 0)\r
+ {\r
+ WriteInt(AH, zp->avail_in);\r
+ fwrite(zp->next_in, 1, zp->avail_in, AH->FH);\r
+ ctx->filePos += zp->avail_in;\r
+ zp->avail_in = 0;\r
+ } else {\r
+#ifdef HAVE_ZLIB\r
+ if (flush == Z_FINISH)\r
+ res = Z_STREAM_END;\r
+#endif\r
+ }\r
+\r
+\r
+#ifdef HAVE_ZLIB\r
+ }\r
+\r
+ return res;\r
+#else\r
+ return 1;\r
+#endif\r
+\r
+}\r
+\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ z_streamp zp = ctx->zp;\r
+\r
+ zp->next_in = (void*)data;\r
+ zp->avail_in = dLen;\r
+\r
+ while (zp->avail_in != 0) {\r
+ /* printf("Deflating %d bytes\n", dLen); */\r
+ _DoDeflate(AH, ctx, 0);\r
+ }\r
+ return dLen;\r
+}\r
+\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ lclTocEntry* tctx = (lclTocEntry*) te->formatData;\r
+\r
+#ifdef HAVE_ZLIB\r
+ z_streamp zp = ctx->zp;\r
+ int res;\r
+\r
+ if (AH->compression != 0)\r
+ {\r
+ zp->next_in = NULL;\r
+ zp->avail_in = 0;\r
+\r
+ do { \r
+ /* printf("Ending data output\n"); */\r
+ res = _DoDeflate(AH, ctx, Z_FINISH);\r
+ } while (res != Z_STREAM_END);\r
+\r
+ if (deflateEnd(zp) != Z_OK)\r
+ die_horribly("%s: error closing compression stream - %s\n", progname, zp->msg);\r
+ }\r
+#endif\r
+\r
+ /* Send the end marker */\r
+ WriteInt(AH, 0);\r
+\r
+ tctx->dataLen = _getFilePos(AH, ctx) - tctx->dataPos;\r
+\r
+}\r
+\r
+/*\r
+ * Print data for a given TOC entry\r
+*/\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int id;\r
+ lclTocEntry* tctx = (lclTocEntry*) te->formatData;\r
+\r
+ if (tctx->dataPos == 0) \r
+ return;\r
+\r
+ if (!ctx->hasSeek || tctx->dataPos < 0) {\r
+ id = ReadInt(AH);\r
+\r
+ while (id != te->id) {\r
+ if (TocIDRequired(AH, id, ropt) & 2)\r
+ die_horribly("%s: Dumping a specific TOC data block out of order is not supported"\r
+ " without on this input stream (fseek required)\n", progname);\r
+ _skipData(AH);\r
+ id = ReadInt(AH);\r
+ }\r
+ } else {\r
+\r
+ if (fseek(AH->FH, tctx->dataPos, SEEK_SET) != 0)\r
+ die_horribly("%s: error %d in file seek\n",progname, errno);\r
+\r
+ id = ReadInt(AH);\r
+\r
+ }\r
+\r
+ if (id != te->id)\r
+ die_horribly("%s: Found unexpected block ID (%d) when reading data - expected %d\n",\r
+ progname, id, te->id);\r
+\r
+ ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",\r
+ te->id, te->oid, te->desc, te->name);\r
+\r
+ _PrintData(AH);\r
+\r
+ ahprintf(AH, "\n\n");\r
+}\r
+\r
+/*\r
+ * Print data from current file position.\r
+*/\r
+static void _PrintData(ArchiveHandle* AH)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ z_streamp zp = ctx->zp;\r
+ int blkLen;\r
+ char* in = ctx->zlibIn;\r
+ int cnt;\r
+\r
+#ifdef HAVE_ZLIB\r
+\r
+ int res;\r
+ char* out = ctx->zlibOut;\r
+\r
+ res = Z_OK;\r
+\r
+ if (AH->compression != 0) {\r
+ zp->zalloc = Z_NULL;\r
+ zp->zfree = Z_NULL;\r
+ zp->opaque = Z_NULL;\r
+\r
+ if (inflateInit(zp) != Z_OK)\r
+ die_horribly("%s: could not initialize compression library - %s\n", progname, zp->msg);\r
+ }\r
+\r
+#endif\r
+\r
+ blkLen = ReadInt(AH);\r
+ while (blkLen != 0) {\r
+ if (blkLen > ctx->inSize) {\r
+ free(ctx->zlibIn);\r
+ ctx->zlibIn = NULL;\r
+ ctx->zlibIn = (char*)malloc(blkLen);\r
+ if (!ctx->zlibIn)\r
+ die_horribly("%s: failed to allocate decompression buffer\n", progname);\r
+\r
+ ctx->inSize = blkLen;\r
+ in = ctx->zlibIn;\r
+ }\r
+ cnt = fread(in, 1, blkLen, AH->FH);\r
+ if (cnt != blkLen) \r
+ die_horribly("%s: could not read data block - expected %d, got %d\n", progname, blkLen, cnt);\r
+\r
+ ctx->filePos += blkLen;\r
+\r
+ zp->next_in = in;\r
+ zp->avail_in = blkLen;\r
+\r
+#ifdef HAVE_ZLIB\r
+\r
+ if (AH->compression != 0) {\r
+\r
+ while (zp->avail_in != 0) {\r
+ zp->next_out = out;\r
+ zp->avail_out = zlibOutSize;\r
+ res = inflate(zp, 0);\r
+ if (res != Z_OK && res != Z_STREAM_END)\r
+ die_horribly("%s: unable to uncompress data - %s\n", progname, zp->msg);\r
+\r
+ out[zlibOutSize - zp->avail_out] = '\0';\r
+ ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);\r
+ }\r
+ } else {\r
+#endif\r
+ ahwrite(in, 1, zp->avail_in, AH);\r
+ zp->avail_in = 0;\r
+\r
+#ifdef HAVE_ZLIB\r
+ }\r
+#endif\r
+\r
+ blkLen = ReadInt(AH);\r
+ }\r
+\r
+#ifdef HAVE_ZLIB\r
+ if (AH->compression != 0) \r
+ {\r
+ zp->next_in = NULL;\r
+ zp->avail_in = 0;\r
+ while (res != Z_STREAM_END) {\r
+ zp->next_out = out;\r
+ zp->avail_out = zlibOutSize;\r
+ res = inflate(zp, 0);\r
+ if (res != Z_OK && res != Z_STREAM_END)\r
+ die_horribly("%s: unable to uncompress data - %s\n", progname, zp->msg);\r
+\r
+ out[zlibOutSize - zp->avail_out] = '\0';\r
+ ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);\r
+ }\r
+ }\r
+#endif\r
+\r
+}\r
+\r
+/*\r
+ * Skip data from current file position.\r
+*/\r
+static void _skipData(ArchiveHandle* AH)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int blkLen;\r
+ char* in = ctx->zlibIn;\r
+ int cnt;\r
+\r
+ blkLen = ReadInt(AH);\r
+ while (blkLen != 0) {\r
+ if (blkLen > ctx->inSize) {\r
+ free(ctx->zlibIn);\r
+ ctx->zlibIn = (char*)malloc(blkLen);\r
+ if (!ctx->zlibIn)\r
+ die_horribly("%s: failed to allocate block buffer\n", progname);\r
+ ctx->inSize = blkLen;\r
+ in = ctx->zlibIn;\r
+ }\r
+ }\r
+ cnt = fread(in, 1, blkLen, AH->FH);\r
+ if (cnt != blkLen) \r
+ die_horribly("%s: could not read data block - expected %d, got %d\n", progname, blkLen, cnt);\r
+\r
+ ctx->filePos += blkLen;\r
+\r
+ blkLen = ReadInt(AH);\r
+ }\r
+\r
+}\r
+\r
+static int _WriteByte(ArchiveHandle* AH, const int i)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+\r
+ res = fputc(i, AH->FH);\r
+ if (res != EOF) {\r
+ ctx->filePos += 1;\r
+ }\r
+ return res;\r
+}\r
+\r
+static int _ReadByte(ArchiveHandle* AH)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+\r
+ res = fgetc(AH->FH);\r
+ if (res != EOF) {\r
+ ctx->filePos += 1;\r
+ }\r
+ return res;\r
+}\r
+\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+ res = fwrite(buf, 1, len, AH->FH);\r
+ ctx->filePos += res;\r
+ return res;\r
+}\r
+\r
+static int _ReadBuf(ArchiveHandle* AH, void* buf, int len)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+ res = fread(buf, 1, len, AH->FH);\r
+ ctx->filePos += res;\r
+ return res;\r
+}\r
+\r
+static void _CloseArchive(ArchiveHandle* AH)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int tpos;\r
+\r
+ if (AH->mode == archModeWrite) {\r
+ WriteHead(AH);\r
+ tpos = ftell(AH->FH);\r
+ WriteToc(AH);\r
+ ctx->dataStart = _getFilePos(AH, ctx);\r
+ WriteDataChunks(AH);\r
+ /* This is not an essential operation - it is really only\r
+ * needed if we expect to be doing seeks to read the data back\r
+ * - it may be ok to just use the existing self-consistent block\r
+ * formatting.\r
+ */\r
+ if (ctx->hasSeek) {\r
+ fseek(AH->FH, tpos, SEEK_SET);\r
+ WriteToc(AH);\r
+ }\r
+ }\r
+\r
+ fclose(AH->FH);\r
+ AH->FH = NULL; \r
+}\r
+\r
+static int _getFilePos(ArchiveHandle* AH, lclContext* ctx) \r
+{\r
+ int pos;\r
+ if (ctx->hasSeek) {\r
+ pos = ftell(AH->FH);\r
+ if (pos != ctx->filePos) {\r
+ fprintf(stderr, "Warning: ftell mismatch with filePos\n");\r
+ }\r
+ } else {\r
+ pos = ctx->filePos;\r
+ }\r
+ return pos;\r
+}\r
+\r
+\r
--- /dev/null
+/*-------------------------------------------------------------------------\r
+ *\r
+ * pg_backup_files.c\r
+ *\r
+ * This file is copied from the 'custom' format file, but dumps data into\r
+ * separate files, and the TOC into the 'main' file.\r
+ *\r
+ * IT IS FOR DEMONSTRATION PURPOSES ONLY.\r
+ *\r
+ * (and could probably be used as a basis for writing a tar file)\r
+ *\r
+ * See the headers to pg_restore for more details.\r
+ *\r
+ * Copyright (c) 2000, Philip Warner\r
+ * Rights are granted to use this software in any way so long\r
+ * as this notice is not removed.\r
+ *\r
+ * The author is not responsible for loss or damages that may\r
+ * result from its use.\r
+ *\r
+ *\r
+ * IDENTIFICATION\r
+ *\r
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au\r
+ *\r
+ * Initial version. \r
+ *\r
+ *-------------------------------------------------------------------------\r
+ */\r
+\r
+#include <stdlib.h>\r
+#include <string.h>\r
+#include "pg_backup.h"\r
+#include "pg_backup_archiver.h"\r
+\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteByte(ArchiveHandle* AH, const int i);\r
+static int _ReadByte(ArchiveHandle* );\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);\r
+static int _ReadBuf(ArchiveHandle* AH, void* buf, int len);\r
+static void _CloseArchive(ArchiveHandle* AH);\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);\r
+static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te);\r
+\r
+\r
+typedef struct {\r
+ int hasSeek;\r
+ int filePos;\r
+} lclContext;\r
+\r
+typedef struct {\r
+#ifdef HAVE_ZLIB\r
+ gzFile *FH;\r
+#else\r
+ FILE *FH;\r
+#endif\r
+ char *filename;\r
+} lclTocEntry;\r
+\r
+/*\r
+ * Initializer\r
+ */\r
+void InitArchiveFmt_Files(ArchiveHandle* AH) \r
+{\r
+ lclContext* ctx;\r
+\r
+ /* Assuming static functions, this can be copied for each format. */\r
+ AH->ArchiveEntryPtr = _ArchiveEntry;\r
+ AH->StartDataPtr = _StartData;\r
+ AH->WriteDataPtr = _WriteData;\r
+ AH->EndDataPtr = _EndData;\r
+ AH->WriteBytePtr = _WriteByte;\r
+ AH->ReadBytePtr = _ReadByte;\r
+ AH->WriteBufPtr = _WriteBuf;\r
+ AH->ReadBufPtr = _ReadBuf;\r
+ AH->ClosePtr = _CloseArchive;\r
+ AH->PrintTocDataPtr = _PrintTocData;\r
+ AH->ReadExtraTocPtr = _ReadExtraToc;\r
+ AH->WriteExtraTocPtr = _WriteExtraToc;\r
+ AH->PrintExtraTocPtr = _PrintExtraToc;\r
+\r
+ /*\r
+ * Set up some special context for this format.\r
+ */\r
+ ctx = (lclContext*)malloc(sizeof(lclContext));\r
+ AH->formatData = (void*)ctx;\r
+ ctx->filePos = 0;\r
+\r
+ /*\r
+ * Now open the TOC file\r
+ */\r
+ if (AH->mode == archModeWrite) {\r
+ if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {\r
+ AH->FH = fopen(AH->fSpec, PG_BINARY_W);\r
+ } else {\r
+ AH->FH = stdout;\r
+ }\r
+ ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);\r
+\r
+ if (AH->compression < 0 || AH->compression > 9) {\r
+ AH->compression = Z_DEFAULT_COMPRESSION;\r
+ }\r
+\r
+\r
+ } else {\r
+ if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {\r
+ AH->FH = fopen(AH->fSpec, PG_BINARY_R);\r
+ } else {\r
+ AH->FH = stdin;\r
+ }\r
+ ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);\r
+\r
+ ReadHead(AH);\r
+ ReadToc(AH);\r
+ fclose(AH->FH); /* Nothing else in the file... */\r
+ }\r
+\r
+}\r
+\r
+/*\r
+ * - Start a new TOC entry\r
+ * Setup the output file name.\r
+ */\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te) \r
+{\r
+ lclTocEntry* ctx;\r
+ char fn[1024];\r
+\r
+ ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));\r
+ if (te->dataDumper) {\r
+#ifdef HAVE_ZLIB\r
+ if (AH->compression == 0) {\r
+ sprintf(fn, "%d.dat", te->id);\r
+ } else {\r
+ sprintf(fn, "%d.dat.gz", te->id);\r
+ }\r
+#else\r
+ sprintf(fn, "%d.dat", te->id);\r
+#endif\r
+ ctx->filename = strdup(fn);\r
+ } else {\r
+ ctx->filename = NULL;\r
+ ctx->FH = NULL;\r
+ }\r
+ te->formatData = (void*)ctx;\r
+}\r
+\r
+static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ if (ctx->filename) {\r
+ WriteStr(AH, ctx->filename);\r
+ } else {\r
+ WriteStr(AH, "");\r
+ }\r
+}\r
+\r
+static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ if (ctx == NULL) {\r
+ ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));\r
+ te->formatData = (void*)ctx;\r
+ }\r
+\r
+ ctx->filename = ReadStr(AH);\r
+ if (strlen(ctx->filename) == 0) {\r
+ free(ctx->filename);\r
+ ctx->filename = NULL;\r
+ }\r
+ ctx->FH = NULL;\r
+}\r
+\r
+static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* ctx = (lclTocEntry*)te->formatData;\r
+\r
+ ahprintf(AH, "-- File: %s\n", ctx->filename);\r
+}\r
+\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* tctx = (lclTocEntry*)te->formatData;\r
+ char fmode[10];\r
+\r
+ sprintf(fmode, "wb%d", AH->compression);\r
+\r
+#ifdef HAVE_ZLIB\r
+ tctx->FH = gzopen(tctx->filename, fmode);\r
+#else\r
+ tctx->FH = fopen(tctx->filename, PG_BINARY_W);\r
+#endif\r
+}\r
+\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)\r
+{\r
+ lclTocEntry* tctx = (lclTocEntry*)AH->currToc->formatData;\r
+\r
+ GZWRITE((void*)data, 1, dLen, tctx->FH);\r
+\r
+ return dLen;\r
+}\r
+\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ lclTocEntry* tctx = (lclTocEntry*) te->formatData;\r
+\r
+ /* Close the file */\r
+ GZCLOSE(tctx->FH);\r
+ tctx->FH = NULL;\r
+}\r
+\r
+/*\r
+ * Print data for a given TOC entry\r
+*/\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)\r
+{\r
+ lclTocEntry* tctx = (lclTocEntry*) te->formatData;\r
+ char buf[4096];\r
+ int cnt;\r
+\r
+ if (!tctx->filename) \r
+ return;\r
+\r
+#ifdef HAVE_ZLIB\r
+ AH->FH = gzopen(tctx->filename,"rb");\r
+#else\r
+ AH->FH = fopen(tctx->filename,PG_BINARY_R);\r
+#endif\r
+\r
+ ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",\r
+ te->id, te->oid, te->desc, te->name);\r
+\r
+ while ( (cnt = GZREAD(buf, 1, 4096, AH->FH)) > 0) {\r
+ ahwrite(buf, 1, cnt, AH);\r
+ }\r
+\r
+ GZCLOSE(AH->FH);\r
+\r
+ ahprintf(AH, "\n\n");\r
+}\r
+\r
+static int _WriteByte(ArchiveHandle* AH, const int i)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+\r
+ res = fputc(i, AH->FH);\r
+ if (res != EOF) {\r
+ ctx->filePos += 1;\r
+ }\r
+ return res;\r
+}\r
+\r
+static int _ReadByte(ArchiveHandle* AH)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+\r
+ res = fgetc(AH->FH);\r
+ if (res != EOF) {\r
+ ctx->filePos += 1;\r
+ }\r
+ return res;\r
+}\r
+\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+ res = fwrite(buf, 1, len, AH->FH);\r
+ ctx->filePos += res;\r
+ return res;\r
+}\r
+\r
+static int _ReadBuf(ArchiveHandle* AH, void* buf, int len)\r
+{\r
+ lclContext* ctx = (lclContext*)AH->formatData;\r
+ int res;\r
+ res = fread(buf, 1, len, AH->FH);\r
+ ctx->filePos += res;\r
+ return res;\r
+}\r
+\r
+static void _CloseArchive(ArchiveHandle* AH)\r
+{\r
+ if (AH->mode == archModeWrite) {\r
+ WriteHead(AH);\r
+ WriteToc(AH);\r
+ fclose(AH->FH);\r
+ WriteDataChunks(AH);\r
+ }\r
+\r
+ AH->FH = NULL; \r
+}\r
+\r
--- /dev/null
+/*-------------------------------------------------------------------------\r
+ *\r
+ * pg_backup_plain_text.c\r
+ *\r
+ * This file is copied from the 'custom' format file, but dumps data\r
+ * directly to a text file, and the TOC into the 'main' file.\r
+ *\r
+ * See the headers to pg_restore for more details.\r
+ *\r
+ * Copyright (c) 2000, Philip Warner\r
+ * Rights are granted to use this software in any way so long\r
+ * as this notice is not removed.\r
+ *\r
+ * The author is not responsible for loss or damages that may\r
+ * result from its use.\r
+ *\r
+ *\r
+ * IDENTIFICATION\r
+ *\r
+ * Modifications - 01-Jul-2000 - pjw@rhyme.com.au\r
+ *\r
+ * Initial version. \r
+ *\r
+ *-------------------------------------------------------------------------\r
+ */\r
+\r
+#include <stdlib.h>\r
+#include <string.h>\r
+#include <unistd.h> /* for dup */\r
+#include "pg_backup.h"\r
+#include "pg_backup_archiver.h"\r
+\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te);\r
+static int _WriteByte(ArchiveHandle* AH, const int i);\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);\r
+static void _CloseArchive(ArchiveHandle* AH);\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);\r
+\r
+/*\r
+ * Initializer\r
+ */\r
+void InitArchiveFmt_PlainText(ArchiveHandle* AH) \r
+{\r
+ /* Assuming static functions, this can be copied for each format. */\r
+ AH->ArchiveEntryPtr = _ArchiveEntry;\r
+ AH->StartDataPtr = _StartData;\r
+ AH->WriteDataPtr = _WriteData;\r
+ AH->EndDataPtr = _EndData;\r
+ AH->WriteBytePtr = _WriteByte;\r
+ AH->WriteBufPtr = _WriteBuf;\r
+ AH->ClosePtr = _CloseArchive;\r
+ AH->PrintTocDataPtr = _PrintTocData;\r
+\r
+ /*\r
+ * Now prevent reading...\r
+ */\r
+ if (AH->mode == archModeRead)\r
+ die_horribly("This format cannot be read\n");\r
+\r
+}\r
+\r
+/*\r
+ * - Start a new TOC entry\r
+ */\r
+static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te) \r
+{\r
+ /* Don't need to do anything */\r
+}\r
+\r
+static void _StartData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",\r
+ te->id, te->oid, te->desc, te->name);\r
+}\r
+\r
+static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)\r
+{\r
+ ahwrite(data, 1, dLen, AH);\r
+ return dLen;\r
+}\r
+\r
+static void _EndData(ArchiveHandle* AH, TocEntry* te)\r
+{\r
+ ahprintf(AH, "\n\n");\r
+}\r
+\r
+/*\r
+ * Print data for a given TOC entry\r
+*/\r
+static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)\r
+{\r
+ if (te->dataDumper)\r
+ (*te->dataDumper)((Archive*)AH, te->oid, te->dataDumperArg);\r
+}\r
+\r
+static int _WriteByte(ArchiveHandle* AH, const int i)\r
+{\r
+ /* Don't do anything */\r
+ return 0;\r
+}\r
+\r
+static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)\r
+{\r
+ /* Don't do anything */\r
+ return len;\r
+}\r
+\r
+static void _CloseArchive(ArchiveHandle* AH)\r
+{\r
+ /* Nothing to do */\r
+}\r
+\r
*
*
* IDENTIFICATION
- * $Header: /cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v 1.153 2000/07/02 15:21:05 petere Exp $
+ * $Header: /cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v 1.154 2000/07/04 14:25:28 momjian Exp $
*
* Modifications - 6/10/96 - dave@bensoft.com - version 1.13.dhb
*
*
* Modifications - 1/26/98 - pjlobo@euitt.upm.es
* - Added support for password authentication
- *-------------------------------------------------------------------------
+ *
+ * Modifications - 28-Jun-2000 - Philip Warner pjw@rhyme.com.au
+ * - Used custom IO routines to allow for more
+ * output formats and simple rearrangement of order.
+ * - Discouraged operations more appropriate to the 'restore'
+ * operation. (eg. -c "clear schema" - now always dumps
+ * commands, but pg_restore can be told not to output them).
+ * - Added RI warnings to the 'as insert strings' output mode
+ * - Added a small number of comments
+ * - Added a -Z option for compression level on compressed formats
+ * - Restored '-f' in usage output
+ *
+*-------------------------------------------------------------------------
*/
#include <unistd.h> /* for getopt() */
#endif
#include "pg_dump.h"
+#include "pg_backup.h"
-static void dumpComment(FILE *outfile, const char *target, const char *oid);
-static void dumpSequence(FILE *fout, TableInfo tbinfo);
-static void dumpACL(FILE *fout, TableInfo tbinfo);
-static void dumpTriggers(FILE *fout, const char *tablename,
+static void dumpComment(Archive *outfile, const char *target, const char *oid);
+static void dumpSequence(Archive *fout, TableInfo tbinfo);
+static void dumpACL(Archive *fout, TableInfo tbinfo);
+static void dumpTriggers(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables);
-static void dumpRules(FILE *fout, const char *tablename,
+static void dumpRules(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables);
static char *checkForQuote(const char *s);
static void clearTableInfo(TableInfo *, int);
-static void dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
+static void dumpOneFunc(Archive *fout, FuncInfo *finfo, int i,
TypeInfo *tinfo, int numTypes);
static int findLastBuiltinOid(void);
static bool isViewRule(char *relname);
-static void setMaxOid(FILE *fout);
+static void setMaxOid(Archive *fout);
static void AddAcl(char *aclbuf, const char *keyword);
static char *GetPrivileges(const char *s);
-static void becomeUser(FILE *fout, const char *username);
extern char *optarg;
extern int optind,
bool g_verbose; /* User wants verbose narration of our
* activities. */
int g_last_builtin_oid; /* value of the last builtin oid */
-FILE *g_fout; /* the script file */
+Archive *g_fout; /* the script file */
PGconn *g_conn; /* the database connection */
bool force_quotes; /* User wants to suppress double-quotes */
bool schemaOnly;
bool dataOnly;
bool aclsSkip;
-bool dropSchema;
char g_opaque_type[10]; /* name for the opaque type */
char g_comment_end[10];
+typedef struct _dumpContext {
+ TableInfo *tblinfo;
+ int tblidx;
+ bool oids;
+} DumpContext;
+
static void
help(const char *progname)
{
#ifdef HAVE_GETOPT_LONG
puts(
- " -a, --data-only dump out only the data, not the schema\n"
- " -c, --clean clean (drop) schema prior to create\n"
- " -d, --inserts dump data as INSERT, rather than COPY, commands\n"
- " -D, --attribute-inserts dump data as INSERT commands with attribute names\n"
- " -h, --host <hostname> server host name\n"
- " -i, --ignore-version proceed when database version != pg_dump version\n"
- " -n, --no-quotes suppress most quotes around identifiers\n"
- " -N, --quotes enable most quotes around identifiers\n"
- " -o, --oids dump object ids (oids)\n"
- " -p, --port <port> server port number\n"
- " -s, --schema-only dump out only the schema, no data\n"
- " -t, --table <table> dump for this table only\n"
- " -u, --password use password authentication\n"
- " -v, --verbose verbose\n"
- " -x, --no-acl do not dump ACL's (grant/revoke)\n"
+ " -a, --data-only dump out only the data, not the schema\n"
+ " -c, --clean clean (drop) schema prior to create\n"
+ " -d, --inserts dump data as INSERT, rather than COPY, commands\n"
+ " -D, --attribute-inserts dump data as INSERT commands with attribute names\n"
+ " -f, --file specify output file name\n"
+ " -F, --format {c|f|p} output file format (custom, files, plain text)\n"
+ " -h, --host <hostname> server host name\n"
+ " -i, --ignore-version proceed when database version != pg_dump version\n"
+ " -n, --no-quotes suppress most quotes around identifiers\n"
+ " -N, --quotes enable most quotes around identifiers\n"
+ " -o, --oids dump object ids (oids)\n"
+ " -p, --port <port> server port number\n"
+ " -s, --schema-only dump out only the schema, no data\n"
+ " -t, --table <table> dump for this table only\n"
+ " -u, --password use password authentication\n"
+ " -v, --verbose verbose\n"
+ " -x, --no-acl do not dump ACL's (grant/revoke)\n"
+ " -Z, --compress {0-9} compression level for compressed formats\n"
);
#else
puts(
- " -a dump out only the data, no schema\n"
- " -c clean (drop) schema prior to create\n"
- " -d dump data as INSERT, rather than COPY, commands\n"
- " -D dump data as INSERT commands with attribute names\n"
- " -h <hostname> server host name\n"
- " -i proceed when database version != pg_dump version\n"
- " -n suppress most quotes around identifiers\n"
- " -N enable most quotes around identifiers\n"
- " -o dump object ids (oids)\n"
- " -p <port> server port number\n"
- " -s dump out only the schema, no data\n"
- " -t <table> dump for this table only\n"
- " -u use password authentication\n"
- " -v verbose\n"
- " -x do not dump ACL's (grant/revoke)\n"
+ " -a dump out only the data, no schema\n"
+ " -c clean (drop) schema prior to create\n"
+ " -d dump data as INSERT, rather than COPY, commands\n"
+ " -D dump data as INSERT commands with attribute names\n"
+ " -f specify output file name\n"
+ " -F {c|f|p} output file format (custom, files, plain text)\n"
+ " -h <hostname> server host name\n"
+ " -i proceed when database version != pg_dump version\n"
+ " -n suppress most quotes around identifiers\n"
+ " -N enable most quotes around identifiers\n"
+ " -o dump object ids (oids)\n"
+ " -p <port> server port number\n"
+ " -s dump out only the schema, no data\n"
+ " -t <table> dump for this table only\n"
+ " -u use password authentication\n"
+ " -v verbose\n"
+ " -x do not dump ACL's (grant/revoke)\n"
+ " -Z {0-9} compression level for compressed formats\n"
);
#endif
 puts("If no database name is supplied, then the PGDATABASE environment\nvariable value is used.\n");
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "isViewRule(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "isViewRule(): SELECT failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
#define COPYBUFSIZ 8192
-
-static void
-dumpClasses_nodumpData(FILE *fout, const char *classname, const bool oids)
+/*
+ * Dump a table's contents for loading using the COPY command
+ * - this routine is called by the Archiver when it wants the table
+ * to be dumped.
+ */
+static int
+dumpClasses_nodumpData(Archive *fout, char* oid, void *dctxv)
{
+ const DumpContext *dctx = (DumpContext*)dctxv;
+ const char *classname = dctx->tblinfo[dctx->tblidx].relname;
+ const bool oids = dctx->oids;
PGresult *res;
char query[255];
if (oids == true)
{
- fprintf(fout, "COPY %s WITH OIDS FROM stdin;\n",
+ archprintf(fout, "COPY %s WITH OIDS FROM stdin;\n",
fmtId(classname, force_quotes));
sprintf(query, "COPY %s WITH OIDS TO stdout;\n",
fmtId(classname, force_quotes));
}
else
{
- fprintf(fout, "COPY %s FROM stdin;\n", fmtId(classname, force_quotes));
+ archprintf(fout, "COPY %s FROM stdin;\n", fmtId(classname, force_quotes));
sprintf(query, "COPY %s TO stdout;\n", fmtId(classname, force_quotes));
}
res = PQexec(g_conn, query);
}
else
{
- fputs(copybuf, fout);
+ archputs(copybuf, fout);
switch (ret)
{
case EOF:
copydone = true;
/* FALLTHROUGH */
case 0:
- fputc('\n', fout);
+ archputc('\n', fout);
break;
case 1:
break;
}
}
}
- fprintf(fout, "\\.\n");
+ archprintf(fout, "\\.\n");
}
ret = PQendcopy(g_conn);
if (ret != 0)
exit_nicely(g_conn);
}
}
+ return 1;
}
-static void
-dumpClasses_dumpData(FILE *fout, const char *classname)
+static int
+dumpClasses_dumpData(Archive *fout, char* oid, void *dctxv)
{
+ const DumpContext *dctx = (DumpContext*)dctxv;
+ const char *classname = dctx->tblinfo[dctx->tblidx].relname;
+
PGresult *res;
PQExpBuffer q = createPQExpBuffer();
int tuple;
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "dumpClasses(): command failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "dumpClasses(): command failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
for (tuple = 0; tuple < PQntuples(res); tuple++)
{
- fprintf(fout, "INSERT INTO %s ", fmtId(classname, force_quotes));
+ archprintf(fout, "INSERT INTO %s ", fmtId(classname, force_quotes));
if (attrNames == true)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, fmtId(PQfname(res, field), force_quotes));
}
appendPQExpBuffer(q, ") ");
- fprintf(fout, "%s", q->data);
+ archprintf(fout, "%s", q->data);
}
- fprintf(fout, "VALUES (");
+ archprintf(fout, "VALUES (");
for (field = 0; field < PQnfields(res); field++)
{
if (field > 0)
- fprintf(fout, ",");
+ archprintf(fout, ",");
if (PQgetisnull(res, tuple, field))
{
- fprintf(fout, "NULL");
+ archprintf(fout, "NULL");
continue;
}
switch (PQftype(res, field))
case FLOAT4OID:
case FLOAT8OID:/* float types */
/* These types are printed without quotes */
- fprintf(fout, "%s",
+ archprintf(fout, "%s",
PQgetvalue(res, tuple, field));
break;
default:
* Quote mark ' goes to '' per SQL standard, other
* stuff goes to \ sequences.
*/
- putc('\'', fout);
+ archputc('\'', fout);
expsrc = PQgetvalue(res, tuple, field);
while (*expsrc)
{
if (ch == '\\' || ch == '\'')
{
- putc(ch, fout); /* double these */
- putc(ch, fout);
+ archputc(ch, fout); /* double these */
+ archputc(ch, fout);
}
else if (ch < '\040')
{
/* generate octal escape for control chars */
- putc('\\', fout);
- putc(((ch >> 6) & 3) + '0', fout);
- putc(((ch >> 3) & 7) + '0', fout);
- putc((ch & 7) + '0', fout);
+ archputc('\\', fout);
+ archputc(((ch >> 6) & 3) + '0', fout);
+ archputc(((ch >> 3) & 7) + '0', fout);
+ archputc((ch & 7) + '0', fout);
}
else
- putc(ch, fout);
+ archputc(ch, fout);
}
- putc('\'', fout);
+ archputc('\'', fout);
break;
}
}
- fprintf(fout, ");\n");
+ archprintf(fout, ");\n");
}
PQclear(res);
+ return 1;
}
-
-
/*
* DumpClasses -
* dump the contents of all the classes.
*/
static void
-dumpClasses(const TableInfo *tblinfo, const int numTables, FILE *fout,
+dumpClasses(const TableInfo *tblinfo, const int numTables, Archive *fout,
const char *onlytable, const bool oids)
{
- int i;
- char *all_only;
+ int i;
+ char *all_only;
+ DataDumperPtr dumpFn;
+ DumpContext *dumpCtx;
if (onlytable == NULL)
all_only = "all";
if (g_verbose)
fprintf(stderr, "%s dumping out schema of sequence '%s' %s\n",
g_comment_start, tblinfo[i].relname, g_comment_end);
- becomeUser(fout, tblinfo[i].usename);
+ /* becomeUser(fout, tblinfo[i].usename); */
dumpSequence(fout, tblinfo[i]);
}
}
fprintf(stderr, "%s dumping out the contents of Table '%s' %s\n",
g_comment_start, classname, g_comment_end);
- becomeUser(fout, tblinfo[i].usename);
+ /* becomeUser(fout, tblinfo[i].usename); */
+
+ dumpCtx = (DumpContext*)malloc(sizeof(DumpContext));
+ dumpCtx->tblinfo = (TableInfo*)tblinfo;
+ dumpCtx->tblidx = i;
+ dumpCtx->oids = oids;
if (!dumpData)
- dumpClasses_nodumpData(fout, classname, oids);
+ dumpFn = dumpClasses_nodumpData;
+ /* dumpClasses_nodumpData(fout, classname, oids); */
else
- dumpClasses_dumpData(fout, classname);
+ dumpFn = dumpClasses_dumpData;
+ /* dumpClasses_dumpData(fout, classname); */
+
+ ArchiveEntry(fout, tblinfo[i].oid, fmtId(tblinfo[i].relname, false),
+ "TABLE DATA", NULL, "", "", tblinfo[i].usename,
+ dumpFn, dumpCtx);
}
}
}
-
static void
prompt_for_password(char *username, char *password)
{
int c;
const char *progname;
const char *filename = NULL;
+ const char *format = "p";
const char *dbname = NULL;
const char *pghost = NULL;
const char *pgport = NULL;
char username[100];
char password[100];
bool use_password = false;
+ int compressLevel = -1;
bool ignore_version = false;
+ int plainText = 0;
+ int outputClean = 0;
+ RestoreOptions *ropt;
#ifdef HAVE_GETOPT_LONG
static struct option long_options[] = {
{"data-only", no_argument, NULL, 'a'},
{"clean", no_argument, NULL, 'c'},
+ {"file", required_argument, NULL, 'f'},
+ {"format", required_argument, NULL, 'F'},
{"inserts", no_argument, NULL, 'd'},
{"attribute-inserts", no_argument, NULL, 'D'},
{"host", required_argument, NULL, 'h'},
{"password", no_argument, NULL, 'u'},
{"verbose", no_argument, NULL, 'v'},
{"no-acl", no_argument, NULL, 'x'},
+ {"compress", required_argument, NULL, 'Z'},
{"help", no_argument, NULL, '?'},
{"version", no_argument, NULL, 'V'}
};
g_verbose = false;
force_quotes = true;
- dropSchema = false;
strcpy(g_comment_start, "-- ");
g_comment_end[0] = '\0';
#ifdef HAVE_GETOPT_LONG
- while ((c = getopt_long(argc, argv, "acdDf:h:inNop:st:uvxzV?", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acdDf:F:h:inNop:st:uvxzZ:V?", long_options, &optindex)) != -1)
#else
- while ((c = getopt(argc, argv, "acdDf:h:inNop:st:uvxzV?-")) != -1)
+ while ((c = getopt(argc, argv, "acdDf:F:h:inNop:st:uvxzZ:V?-")) != -1)
#endif
{
switch (c)
dataOnly = true;
break;
case 'c': /* clean (i.e., drop) schema prior to
- * create */
- dropSchema = true;
- break;
+ * create */
+ outputClean = 1;
+ break;
+
case 'd': /* dump data as proper insert strings */
dumpData = true;
break;
case 'f':
filename = optarg;
break;
+ case 'F':
+ format = optarg;
+ break;
case 'h': /* server host */
pghost = optarg;
break;
case 'x': /* skip ACL dump */
aclsSkip = true;
break;
+ case 'Z': /* Compression Level */
+ compressLevel = atoi(optarg);
+ break;
case 'V':
version();
exit(0);
}
}
+ if (dataOnly && schemaOnly)
+ {
+ fprintf(stderr,
+ "%s: 'Schema Only' and 'Data Only' are incompatible options.\n",
+ progname);
+ exit(1);
+ }
+
if (dumpData == true && oids == true)
{
fprintf(stderr,
}
/* open the output file */
- if (filename == NULL)
- g_fout = stdout;
- else
- {
- g_fout = fopen(filename, PG_BINARY_W);
- if (g_fout == NULL)
- {
+ switch (format[0]) {
+
+ case 'c':
+ case 'C':
+ g_fout = CreateArchive(filename, archCustom, compressLevel);
+ break;
+
+ case 'f':
+ case 'F':
+ g_fout = CreateArchive(filename, archFiles, compressLevel);
+ break;
+
+ case 'p':
+ case 'P':
+ plainText = 1;
+ g_fout = CreateArchive(filename, archPlainText, 0);
+ break;
+
+ default:
fprintf(stderr,
- "%s: could not open output file named %s for writing\n",
- progname, filename);
- exit(1);
- }
+ "%s: invalid output format '%s' specified\n", progname, format);
+ exit(1);
+ }
+
+ if (g_fout == NULL)
+ {
+ fprintf(stderr,
+ "%s: could not open output file named %s for writing\n",
+ progname, filename);
+ exit(1);
}
/* find database */
if (oids == true)
setMaxOid(g_fout);
- if (!dataOnly)
- {
- if (g_verbose)
+
+ if (g_verbose)
fprintf(stderr, "%s last builtin oid is %u %s\n",
g_comment_start, g_last_builtin_oid, g_comment_end);
- tblinfo = dumpSchema(g_fout, &numTables, tablename, aclsSkip);
- }
- else
- tblinfo = dumpSchema(NULL, &numTables, tablename, aclsSkip);
+ tblinfo = dumpSchema(g_fout, &numTables, tablename, aclsSkip, oids, schemaOnly, dataOnly);
if (!schemaOnly)
- {
- if (dataOnly)
- fprintf(g_fout, "UPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" !~ '^pg_';\n");
-
- dumpClasses(tblinfo, numTables, g_fout, tablename, oids);
-
- if (dataOnly)
- {
- fprintf(g_fout, "BEGIN TRANSACTION;\n");
- fprintf(g_fout, "CREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\n");
- fprintf(g_fout, "INSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C, \"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" !~ '^pg_' GROUP BY 1;\n");
- fprintf(g_fout, "UPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" FROM \"tr\" TMP WHERE \"pg_class\".\"relname\" = TMP.\"tmp_relname\";\n");
- fprintf(g_fout, "COMMIT TRANSACTION;\n");
- }
- }
+ dumpClasses(tblinfo, numTables, g_fout, tablename, oids);
if (!dataOnly) /* dump indexes and triggers at the end
* for performance */
dumpRules(g_fout, tablename, tblinfo, numTables);
}
- fflush(g_fout);
- if (g_fout != stdout)
- fclose(g_fout);
+ if (plainText)
+ {
+ ropt = NewRestoreOptions();
+ ropt->filename = (char*)filename;
+ ropt->dropSchema = outputClean;
+ ropt->aclsSkip = aclsSkip;
+
+ if (compressLevel == -1)
+ ropt->compression = 0;
+ else
+ ropt->compression = compressLevel;
+
+ RestoreArchive(g_fout, ropt);
+ }
+
+ CloseArchive(g_fout);
clearTableInfo(tblinfo, numTables);
PQfinish(g_conn);
if (tblinfo[i].typnames[j])
free(tblinfo[i].typnames[j]);
}
+
+ if (tblinfo[i].triggers) {
+ for (j = 0; j < tblinfo[i].ntrig ; j++)
+ {
+ if (tblinfo[i].triggers[j].tgsrc)
+ free(tblinfo[i].triggers[j].tgsrc);
+ if (tblinfo[i].triggers[j].oid)
+ free(tblinfo[i].triggers[j].oid);
+ if (tblinfo[i].triggers[j].tgname)
+ free(tblinfo[i].triggers[j].tgname);
+ if (tblinfo[i].triggers[j].tgdel)
+ free(tblinfo[i].triggers[j].tgdel);
+ }
+ free(tblinfo[i].triggers);
+ }
+
if (tblinfo[i].atttypmod)
free((int *) tblinfo[i].atttypmod);
if (tblinfo[i].inhAttrs)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getAggregates(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getAggregates(): SELECT failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getFuncs(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getFuncs(): SELECT failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
int ntups;
int i;
PQExpBuffer query = createPQExpBuffer();
+ PQExpBuffer delqry = createPQExpBuffer();
TableInfo *tblinfo;
- int i_oid;
+ int i_reloid;
int i_relname;
int i_relkind;
int i_relacl;
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTables(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTables(): SELECT failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
tblinfo = (TableInfo *) malloc(ntups * sizeof(TableInfo));
- i_oid = PQfnumber(res, "oid");
+ i_reloid = PQfnumber(res, "oid");
i_relname = PQfnumber(res, "relname");
i_relkind = PQfnumber(res, "relkind");
i_relacl = PQfnumber(res, "relacl");
for (i = 0; i < ntups; i++)
{
- tblinfo[i].oid = strdup(PQgetvalue(res, i, i_oid));
+ tblinfo[i].oid = strdup(PQgetvalue(res, i, i_reloid));
tblinfo[i].relname = strdup(PQgetvalue(res, i, i_relname));
tblinfo[i].relacl = strdup(PQgetvalue(res, i, i_relacl));
tblinfo[i].sequence = (strcmp(PQgetvalue(res, i, i_relkind), "S") == 0);
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTables(): SELECT (for inherited CHECK) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTables(): SELECT (for inherited CHECK) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTables(): SELECT (for CHECK) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTables(): SELECT (for CHECK) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTables(): SELECT (for TRIGGER) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTables(): SELECT (for TRIGGER) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
i_tgdeferrable = PQfnumber(res2, "tgdeferrable");
i_tginitdeferred = PQfnumber(res2, "tginitdeferred");
- tblinfo[i].triggers = (char **) malloc(ntups2 * sizeof(char *));
- tblinfo[i].trcomments = (char **) malloc(ntups2 * sizeof(char *));
- tblinfo[i].troids = (char **) malloc(ntups2 * sizeof(char *));
+ tblinfo[i].triggers = (TrigInfo*) malloc(ntups2 * sizeof(TrigInfo));
resetPQExpBuffer(query);
for (i2 = 0; i2 < ntups2; i2++)
{
}
else
tgfunc = strdup(finfo[findx].proname);
-#if 0
- /* XXX - how to emit this DROP TRIGGER? */
- if (dropSchema)
- {
- resetPQExpBuffer(query);
- appendPQExpBuffer(query, "DROP TRIGGER %s ",
+
+ appendPQExpBuffer(delqry, "DROP TRIGGER %s ",
fmtId(PQgetvalue(res2, i2, i_tgname),
- force_quotes));
- appendPQExpBuffer(query, "ON %s;\n",
- fmtId(tblinfo[i].relname, force_quotes));
- fputs(query->data, fout);
- }
-#endif
+ force_quotes));
+ appendPQExpBuffer(delqry, "ON %s;\n",
+ fmtId(tblinfo[i].relname, force_quotes));
resetPQExpBuffer(query);
if (tgisconstraint)
p = strchr(p, '\\');
if (p == NULL)
{
- fprintf(stderr, "getTables(): relation '%s': bad argument string (%s) for trigger '%s'\n",
+ fprintf(stderr, "getTables(): relation '%s': bad argument "
+ "string (%s) for trigger '%s'\n",
tblinfo[i].relname,
PQgetvalue(res2, i2, i_tgargs),
PQgetvalue(res2, i2, i_tgname));
}
appendPQExpBuffer(query, ");\n");
- tblinfo[i].triggers[i2] = strdup(query->data);
+ tblinfo[i].triggers[i2].tgsrc = strdup(query->data);
/*** Initialize trcomments and troids ***/
fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes));
appendPQExpBuffer(query, "ON %s",
fmtId(tblinfo[i].relname, force_quotes));
- tblinfo[i].trcomments[i2] = strdup(query->data);
- tblinfo[i].troids[i2] = strdup(PQgetvalue(res2, i2, i_tgoid));
+ tblinfo[i].triggers[i2].tgcomment = strdup(query->data);
+ tblinfo[i].triggers[i2].oid = strdup(PQgetvalue(res2, i2, i_tgoid));
+ tblinfo[i].triggers[i2].tgname = strdup(fmtId(PQgetvalue(res2, i2, i_tgname),false));
+ tblinfo[i].triggers[i2].tgdel = strdup(delqry->data);
if (tgfunc)
free(tgfunc);
else
{
tblinfo[i].triggers = NULL;
- tblinfo[i].trcomments = NULL;
- tblinfo[i].troids = NULL;
}
}
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getInherits(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getInherits(): SELECT failed. Explanation from backend: '%s'.\n",
+ PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTableAttrs(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTableAttrs(): SELECT failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getTableAttrs(): SELECT (for DEFAULT) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getTableAttrs(): SELECT (for DEFAULT) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
tblinfo[i].adef_expr[j] = strdup(PQgetvalue(res2, 0, PQfnumber(res2, "adsrc")));
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "getIndices(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "getIndices(): SELECT failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
*/
static void
-dumpComment(FILE *fout, const char *target, const char *oid)
+dumpComment(Archive *fout, const char *target, const char *oid)
{
PGresult *res;
if (PQntuples(res) != 0)
{
i_description = PQfnumber(res, "description");
- fprintf(fout, "COMMENT ON %s IS '%s';\n",
- target, checkForQuote(PQgetvalue(res, 0, i_description)));
+ resetPQExpBuffer(query);
+ appendPQExpBuffer(query, "COMMENT ON %s IS '%s';\n",
+ target, checkForQuote(PQgetvalue(res, 0, i_description)));
+
+ ArchiveEntry(fout, oid, target, "COMMENT", NULL, query->data, "" /*Del*/,
+ "" /*Owner*/, NULL, NULL);
+
}
/*** Clear the statement buffer and return ***/
*/
void
-dumpDBComment(FILE *fout)
+dumpDBComment(Archive *fout)
{
PGresult *res;
*
*/
void
-dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
+dumpTypes(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
+ PQExpBuffer delq = createPQExpBuffer();
int funcInd;
for (i = 0; i < numTypes; i++)
if (funcInd != -1)
dumpOneFunc(fout, finfo, funcInd, tinfo, numTypes);
- becomeUser(fout, tinfo[i].usename);
-
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP TYPE %s;\n", fmtId(tinfo[i].typname, force_quotes));
- fputs(q->data, fout);
- }
+ appendPQExpBuffer(delq, "DROP TYPE %s;\n", fmtId(tinfo[i].typname, force_quotes));
resetPQExpBuffer(q);
appendPQExpBuffer(q,
else
appendPQExpBuffer(q, ");\n");
- fputs(q->data, fout);
+ ArchiveEntry(fout, tinfo[i].oid, fmtId(tinfo[i].typname, force_quotes), "TYPE", NULL,
+ q->data, delq->data, tinfo[i].usename, NULL, NULL);
/*** Dump Type Comments ***/
resetPQExpBuffer(q);
+ resetPQExpBuffer(delq);
+
appendPQExpBuffer(q, "TYPE %s", fmtId(tinfo[i].typname, force_quotes));
dumpComment(fout, q->data, tinfo[i].oid);
+ resetPQExpBuffer(q);
}
}
*
*/
void
-dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
+dumpProcLangs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
PGresult *res;
PQExpBuffer query = createPQExpBuffer();
+ PQExpBuffer defqry = createPQExpBuffer();
+ PQExpBuffer delqry = createPQExpBuffer();
int ntups;
+ int i_oid;
int i_lanname;
int i_lanpltrusted;
int i_lanplcallfoid;
int i,
fidx;
- appendPQExpBuffer(query, "SELECT * FROM pg_language "
+ appendPQExpBuffer(query, "SELECT oid, * FROM pg_language "
"WHERE lanispl "
"ORDER BY oid");
res = PQexec(g_conn, query->data);
i_lanpltrusted = PQfnumber(res, "lanpltrusted");
i_lanplcallfoid = PQfnumber(res, "lanplcallfoid");
i_lancompiler = PQfnumber(res, "lancompiler");
+ i_oid = PQfnumber(res, "oid");
for (i = 0; i < ntups; i++)
{
}
if (fidx >= numFuncs)
{
- fprintf(stderr, "dumpProcLangs(): handler procedure for language %s not found\n", PQgetvalue(res, i, i_lanname));
+ fprintf(stderr, "dumpProcLangs(): handler procedure for "
+ "language %s not found\n", PQgetvalue(res, i, i_lanname));
exit_nicely(g_conn);
}
lanname = checkForQuote(PQgetvalue(res, i, i_lanname));
lancompiler = checkForQuote(PQgetvalue(res, i, i_lancompiler));
- if (dropSchema)
- fprintf(fout, "DROP PROCEDURAL LANGUAGE '%s';\n", lanname);
+ appendPQExpBuffer(delqry, "DROP PROCEDURAL LANGUAGE '%s';\n", lanname);
- fprintf(fout, "CREATE %sPROCEDURAL LANGUAGE '%s' "
+ appendPQExpBuffer(defqry, "CREATE %sPROCEDURAL LANGUAGE '%s' "
"HANDLER %s LANCOMPILER '%s';\n",
- (PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? "TRUSTED " : "",
+ (PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? "TRUSTED " : "",
lanname,
fmtId(finfo[fidx].proname, force_quotes),
lancompiler);
+ ArchiveEntry(fout, PQgetvalue(res, i, i_oid), lanname, "PROCEDURAL LANGUAGE",
+ NULL, defqry->data, delqry->data, "", NULL, NULL);
+
free(lanname);
free(lancompiler);
}
*
*/
void
-dumpFuncs(FILE *fout, FuncInfo *finfo, int numFuncs,
+dumpFuncs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
int i;
*/
static void
-dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
+dumpOneFunc(Archive *fout, FuncInfo *finfo, int i,
TypeInfo *tinfo, int numTypes)
{
PQExpBuffer q = createPQExpBuffer();
+ PQExpBuffer fn = createPQExpBuffer();
+ PQExpBuffer delqry = createPQExpBuffer();
PQExpBuffer fnlist = createPQExpBuffer();
int j;
char *func_def;
else
finfo[i].dumped = 1;
- becomeUser(fout, finfo[i].usename);
+ /* becomeUser(fout, finfo[i].usename); */
sprintf(query, "SELECT lanname FROM pg_language WHERE oid = %u",
finfo[i].lang);
res = PQexec(g_conn, query);
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
- {
+ {
fprintf(stderr, "dumpOneFunc(): SELECT for procedural language failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
- }
+ }
nlangs = PQntuples(res);
if (nlangs != 1)
- {
+ {
fprintf(stderr, "dumpOneFunc(): procedural language for function %s not found\n", finfo[i].proname);
exit_nicely(g_conn);
- }
-
+ }
+
i_lanname = PQfnumber(res, "lanname");
-
+
func_def = finfo[i].prosrc;
strcpy(func_lang, PQgetvalue(res, 0, i_lanname));
-
+
PQclear(res);
-
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP FUNCTION %s (", fmtId(finfo[i].proname, force_quotes));
- for (j = 0; j < finfo[i].nargs; j++)
- {
- char *typname;
-
- typname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);
- appendPQExpBuffer(q, "%s%s",
- (j > 0) ? "," : "",
- fmtId(typname, false));
- }
- appendPQExpBuffer(q, ");\n");
- fputs(q->data, fout);
- }
-
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "CREATE FUNCTION %s (", fmtId(finfo[i].proname, force_quotes));
+
+ resetPQExpBuffer(fn);
+ appendPQExpBuffer(fn, "%s (", fmtId(finfo[i].proname, force_quotes));
for (j = 0; j < finfo[i].nargs; j++)
{
- char *typname;
+ char *typname;
typname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);
- appendPQExpBuffer(q, "%s%s",
- (j > 0) ? "," : "",
- fmtId(typname, false));
+ appendPQExpBuffer(fn, "%s%s",
+ (j > 0) ? "," : "",
+ fmtId(typname, false));
appendPQExpBuffer(fnlist, "%s%s",
- (j > 0) ? "," : "",
- fmtId(typname, false));
+ (j > 0) ? "," : "",
+ fmtId(typname, false));
}
- appendPQExpBuffer(q, " ) RETURNS %s%s AS '%s' LANGUAGE '%s';\n",
+ appendPQExpBuffer(fn, ")");
+
+ resetPQExpBuffer(delqry);
+ appendPQExpBuffer(delqry, "DROP FUNCTION %s;\n", fn->data );
+
+ resetPQExpBuffer(q);
+ appendPQExpBuffer(q, "CREATE FUNCTION %s ", fn->data );
+ appendPQExpBuffer(q, "RETURNS %s%s AS '%s' LANGUAGE '%s';\n",
(finfo[i].retset) ? " SETOF " : "",
- fmtId(findTypeByOid(tinfo, numTypes, finfo[i].prorettype), false),
+ fmtId(findTypeByOid(tinfo, numTypes, finfo[i].prorettype), false),
func_def, func_lang);
- fputs(q->data, fout);
+ ArchiveEntry(fout, finfo[i].oid, fn->data, "FUNCTION", NULL, q->data, delqry->data,
+ finfo[i].usename, NULL, NULL);
/*** Dump Function Comments ***/
*
*/
void
-dumpOprs(FILE *fout, OprInfo *oprinfo, int numOperators,
+dumpOprs(Archive *fout, OprInfo *oprinfo, int numOperators,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
+ PQExpBuffer delq = createPQExpBuffer();
PQExpBuffer leftarg = createPQExpBuffer();
PQExpBuffer rightarg = createPQExpBuffer();
PQExpBuffer commutator = createPQExpBuffer();
appendPQExpBuffer(sort2, ",\n\tSORT2 = %s ",
findOprByOid(oprinfo, numOperators, oprinfo[i].oprrsortop));
- becomeUser(fout, oprinfo[i].usename);
-
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP OPERATOR %s (%s", oprinfo[i].oprname,
+ resetPQExpBuffer(delq);
+ appendPQExpBuffer(delq, "DROP OPERATOR %s (%s", oprinfo[i].oprname,
fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprleft),
false));
- appendPQExpBuffer(q, ", %s);\n",
- fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright),
+ appendPQExpBuffer(delq, ", %s);\n",
+ fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright),
false));
- fputs(q->data, fout);
- }
resetPQExpBuffer(q);
appendPQExpBuffer(q,
commutator->data,
negator->data,
restrictor->data,
- (strcmp(oprinfo[i].oprcanhash, "t") == 0) ? ",\n\tHASHES" : "",
+ (strcmp(oprinfo[i].oprcanhash, "t") == 0) ? ",\n\tHASHES" : "",
join->data,
sort1->data,
sort2->data);
- fputs(q->data, fout);
+ ArchiveEntry(fout, oprinfo[i].oid, oprinfo[i].oprname, "OPERATOR", NULL,
+ q->data, delq->data, oprinfo[i].usename, NULL, NULL);
}
}
*
*/
void
-dumpAggs(FILE *fout, AggInfo *agginfo, int numAggs,
+dumpAggs(Archive *fout, AggInfo *agginfo, int numAggs,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
+ PQExpBuffer delq = createPQExpBuffer();
+ PQExpBuffer aggSig = createPQExpBuffer();
PQExpBuffer sfunc1 = createPQExpBuffer();
PQExpBuffer sfunc2 = createPQExpBuffer();
PQExpBuffer basetype = createPQExpBuffer();
else
comma2[0] = '\0';
- becomeUser(fout, agginfo[i].usename);
+ resetPQExpBuffer(aggSig);
+ appendPQExpBuffer(aggSig, "%s %s", agginfo[i].aggname,
+ fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP AGGREGATE %s %s;\n", agginfo[i].aggname,
- fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));
- fputs(q->data, fout);
- }
+ resetPQExpBuffer(delq);
+ appendPQExpBuffer(delq, "DROP AGGREGATE %s;\n", aggSig->data);
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE AGGREGATE %s ( %s %s%s %s%s %s );\n",
comma2,
finalfunc->data);
- fputs(q->data, fout);
+ ArchiveEntry(fout, agginfo[i].oid, aggSig->data, "AGGREGATE", NULL,
+ q->data, delq->data, agginfo[i].usename, NULL, NULL);
/*** Dump Aggregate Comments ***/
return strdup(aclbuf);
}
+/*
+ * The name says it all; a function to append a string if the dest
+ * is big enough. If not, it does a realloc.
+ */
+static void strcatalloc(char **dest, int *dSize, char *src)
+{
+ int dLen = strlen(*dest);
+ int sLen = strlen(src);
+ if ( (dLen + sLen) >= *dSize) {
+ *dSize = (dLen + sLen) * 2;
+ *dest = realloc(*dest, *dSize);
+ }
+ strcpy(*dest + dLen, src);
+}
+
+
/*
* dumpACL:
* Write out grant/revoke information
*/
static void
-dumpACL(FILE *fout, TableInfo tbinfo)
+dumpACL(Archive *fout, TableInfo tbinfo)
{
- const char *acls = tbinfo.relacl;
- char *aclbuf,
+ const char *acls = tbinfo.relacl;
+ char *aclbuf,
*tok,
*eqpos,
*priv;
+ char *sql;
+ char tmp[1024];
+ int sSize = 4096;
if (strlen(acls) == 0)
return; /* table has default permissions */
+ /*
+	 * Allocate a largish buffer for the output SQL.
+ */
+ sql = (char*)malloc(sSize);
+
/*
* Revoke Default permissions for PUBLIC. Is this actually necessary,
* or is it just a waste of time?
*/
- fprintf(fout,
- "REVOKE ALL on %s from PUBLIC;\n",
+ sprintf(sql, "REVOKE ALL on %s from PUBLIC;\n",
fmtId(tbinfo.relname, force_quotes));
/* Make a working copy of acls so we can use strtok */
priv = GetPrivileges(eqpos + 1);
if (*priv)
{
- fprintf(fout,
- "GRANT %s on %s to ",
+ sprintf(tmp, "GRANT %s on %s to ",
priv, fmtId(tbinfo.relname, force_quotes));
+ strcatalloc(&sql, &sSize, tmp);
/*
* Note: fmtId() can only be called once per printf, so don't
if (eqpos == tok)
{
/* Empty left-hand side means "PUBLIC" */
- fprintf(fout, "PUBLIC;\n");
+ strcatalloc(&sql, &sSize, "PUBLIC;\n");
}
else
{
*eqpos = '\0'; /* it's ok to clobber aclbuf */
if (strncmp(tok, "group ", strlen("group ")) == 0)
- fprintf(fout, "GROUP %s;\n",
+ sprintf(tmp, "GROUP %s;\n",
fmtId(tok + strlen("group "), force_quotes));
else
- fprintf(fout, "%s;\n", fmtId(tok, force_quotes));
+ sprintf(tmp, "%s;\n", fmtId(tok, force_quotes));
+ strcatalloc(&sql, &sSize, tmp);
}
}
free(priv);
}
free(aclbuf);
+
+ ArchiveEntry(fout, tbinfo.oid, tbinfo.relname, "ACL", NULL, sql, "", "", NULL, NULL);
+
}
*/
void
-dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
+dumpTables(Archive *fout, TableInfo *tblinfo, int numTables,
InhInfo *inhinfo, int numInherits,
TypeInfo *tinfo, int numTypes, const char *tablename,
- const bool aclsSkip)
+ const bool aclsSkip, const bool oids,
+ const bool schemaOnly, const bool dataOnly)
{
int i,
j,
k;
PQExpBuffer q = createPQExpBuffer();
+ PQExpBuffer delq = createPQExpBuffer();
char *serialSeq = NULL; /* implicit sequence name created
* by SERIAL datatype */
const char *serialSeqSuffix = "_id_seq"; /* suffix for implicit
int precision;
int scale;
-
/* First - dump SEQUENCEs */
if (tablename)
{
if (!tablename || (!strcmp(tblinfo[i].relname, tablename))
|| (serialSeq && !strcmp(tblinfo[i].relname, serialSeq)))
{
- becomeUser(fout, tblinfo[i].usename);
+ /* becomeUser(fout, tblinfo[i].usename); */
dumpSequence(fout, tblinfo[i]);
if (!aclsSkip)
dumpACL(fout, tblinfo[i]);
parentRels = tblinfo[i].parentRels;
numParents = tblinfo[i].numParents;
- becomeUser(fout, tblinfo[i].usename);
-
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP TABLE %s;\n", fmtId(tblinfo[i].relname, force_quotes));
- fputs(q->data, fout);
- }
+ resetPQExpBuffer(delq);
+ appendPQExpBuffer(delq, "DROP TABLE %s;\n", fmtId(tblinfo[i].relname, force_quotes));
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE TABLE %s (\n\t", fmtId(tblinfo[i].relname, force_quotes));
}
appendPQExpBuffer(q, ";\n");
- fputs(q->data, fout);
- if (!aclsSkip)
+
+ if (!dataOnly) {
+ ArchiveEntry(fout, tblinfo[i].oid, fmtId(tblinfo[i].relname, false),
+ "TABLE", NULL, q->data, delq->data, tblinfo[i].usename,
+ NULL, NULL);
+ }
+
+ if (!dataOnly && !aclsSkip)
dumpACL(fout, tblinfo[i]);
/* Dump Field Comments */
* write out to fout all the user-define indices
*/
void
-dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
+dumpIndices(Archive *fout, IndInfo *indinfo, int numIndices,
TableInfo *tblinfo, int numTables, const char *tablename)
{
int i,
int nclass;
PQExpBuffer q = createPQExpBuffer(),
+ delq = createPQExpBuffer(),
id1 = createPQExpBuffer(),
id2 = createPQExpBuffer();
PGresult *res;
res = PQexec(g_conn, q->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "dumpIndices(): SELECT (funcname) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "dumpIndices(): SELECT (funcname) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
- funcname = strdup(PQgetvalue(res, 0,
- PQfnumber(res, "proname")));
+ funcname = strdup(PQgetvalue(res, 0, PQfnumber(res, "proname")));
PQclear(res);
}
res = PQexec(g_conn, q->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "dumpIndices(): SELECT (classname) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "dumpIndices(): SELECT (classname) failed. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
- classname[nclass] = strdup(PQgetvalue(res, 0,
- PQfnumber(res, "opcname")));
+ classname[nclass] = strdup(PQgetvalue(res, 0, PQfnumber(res, "opcname")));
PQclear(res);
}
* is not necessarily right but should answer 99% of the time.
* Would have to add owner name to IndInfo to do it right.
*/
- becomeUser(fout, tblinfo[tableInd].usename);
resetPQExpBuffer(id1);
resetPQExpBuffer(id2);
appendPQExpBuffer(id1, fmtId(indinfo[i].indexrelname, force_quotes));
appendPQExpBuffer(id2, fmtId(indinfo[i].indrelname, force_quotes));
- if (dropSchema)
- {
- resetPQExpBuffer(q);
- appendPQExpBuffer(q, "DROP INDEX %s;\n", id1->data);
- fputs(q->data, fout);
- }
+ resetPQExpBuffer(delq);
+ appendPQExpBuffer(delq, "DROP INDEX %s;\n", id1->data);
- fprintf(fout, "CREATE %s INDEX %s on %s using %s (",
- (strcmp(indinfo[i].indisunique, "t") == 0) ? "UNIQUE" : "",
+ resetPQExpBuffer(q);
+ appendPQExpBuffer(q, "CREATE %s INDEX %s on %s using %s (",
+ (strcmp(indinfo[i].indisunique, "t") == 0) ? "UNIQUE" : "",
id1->data,
id2->data,
indinfo[i].indamname);
if (funcname)
{
/* need 2 appends here because fmtId has a static return area */
- fprintf(fout, " %s", fmtId(funcname, false));
- fprintf(fout, " (%s) %s );\n", attlist->data, fmtId(classname[0], force_quotes));
+ appendPQExpBuffer(q, " %s", fmtId(funcname, false));
+ appendPQExpBuffer(q, " (%s) %s );\n", attlist->data,
+ fmtId(classname[0], force_quotes));
free(funcname);
free(classname[0]);
}
else
- fprintf(fout, " %s );\n", attlist->data);
+ appendPQExpBuffer(q, " %s );\n", attlist->data);
/* Dump Index Comments */
+ ArchiveEntry(fout, tblinfo[tableInd].oid, id1->data, "INDEX", NULL, q->data, delq->data,
+ tblinfo[tableInd].usename, NULL, NULL);
+
resetPQExpBuffer(q);
appendPQExpBuffer(q, "INDEX %s", id1->data);
dumpComment(fout, q->data, indinfo[i].indoid);
*/
static void
-setMaxOid(FILE *fout)
+setMaxOid(Archive *fout)
{
- PGresult *res;
- Oid max_oid;
+ PGresult *res;
+ Oid max_oid;
+ char sql[1024];
+ int pos;
res = PQexec(g_conn, "CREATE TABLE pgdump_oid (dummy int4)");
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
- fprintf(stderr, "Can not create pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "Can not create pgdump_oid table. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
PQclear(res);
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
- fprintf(stderr, "Can not insert into pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "Can not insert into pgdump_oid table. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
max_oid = atol(PQoidStatus(res));
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
- fprintf(stderr, "Can not drop pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
+ fprintf(stderr, "Can not drop pgdump_oid table. "
+ "Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
PQclear(res);
if (g_verbose)
fprintf(stderr, "%s maximum system oid is %u %s\n",
g_comment_start, max_oid, g_comment_end);
- fprintf(fout, "CREATE TABLE pgdump_oid (dummy int4);\n");
- fprintf(fout, "COPY pgdump_oid WITH OIDS FROM stdin;\n");
- fprintf(fout, "%-d\t0\n", max_oid);
- fprintf(fout, "\\.\n");
- fprintf(fout, "DROP TABLE pgdump_oid;\n");
+ pos = snprintf(sql, 1024, "CREATE TABLE pgdump_oid (dummy int4);\n");
+ pos = pos + snprintf(sql+pos, 1024-pos, "COPY pgdump_oid WITH OIDS FROM stdin;\n");
+ pos = pos + snprintf(sql+pos, 1024-pos, "%u\t0\n", max_oid);
+ pos = pos + snprintf(sql+pos, 1024-pos, "\\.\n");
+ pos = pos + snprintf(sql+pos, 1024-pos, "DROP TABLE pgdump_oid;\n");
+
+ ArchiveEntry(fout, "0", "Max OID", "<Init>", NULL, sql, "", "", NULL, NULL);
}
/*
static void
-dumpSequence(FILE *fout, TableInfo tbinfo)
+dumpSequence(Archive *fout, TableInfo tbinfo)
{
PGresult *res;
int4 last,
called;
const char *t;
PQExpBuffer query = createPQExpBuffer();
+ PQExpBuffer delqry = createPQExpBuffer();
appendPQExpBuffer(query,
"SELECT sequence_name, last_value, increment_by, max_value, "
res = PQexec(g_conn, query->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
- fprintf(stderr, "dumpSequence(%s): SELECT failed. Explanation from backend: '%s'.\n", tbinfo.relname, PQerrorMessage(g_conn));
+ fprintf(stderr, "dumpSequence(%s): SELECT failed. "
+ "Explanation from backend: '%s'.\n", tbinfo.relname, PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
PQclear(res);
- if (dropSchema)
- {
- resetPQExpBuffer(query);
- appendPQExpBuffer(query, "DROP SEQUENCE %s;\n", fmtId(tbinfo.relname, force_quotes));
- fputs(query->data, fout);
- }
+ resetPQExpBuffer(delqry);
+ appendPQExpBuffer(delqry, "DROP SEQUENCE %s;\n", fmtId(tbinfo.relname, force_quotes));
resetPQExpBuffer(query);
appendPQExpBuffer(query,
"CREATE SEQUENCE %s start %d increment %d maxvalue %d "
"minvalue %d cache %d %s;\n",
- fmtId(tbinfo.relname, force_quotes), last, incby, maxv, minv, cache,
+ fmtId(tbinfo.relname, force_quotes), last, incby, maxv, minv, cache,
(cycled == 't') ? "cycle" : "");
- fputs(query->data, fout);
+ if (called != 'f') {
+ appendPQExpBuffer(query, "SELECT nextval ('%s');\n", fmtId(tbinfo.relname, force_quotes));
+ }
+
+ ArchiveEntry(fout, tbinfo.oid, fmtId(tbinfo.relname, force_quotes), "SEQUENCE", NULL,
+ query->data, delqry->data, tbinfo.usename, NULL, NULL);
/* Dump Sequence Comments */
appendPQExpBuffer(query, "SEQUENCE %s", fmtId(tbinfo.relname, force_quotes));
dumpComment(fout, query->data, tbinfo.oid);
- if (called == 'f')
- return; /* nothing to do more */
-
- resetPQExpBuffer(query);
- appendPQExpBuffer(query, "SELECT nextval ('%s');\n", fmtId(tbinfo.relname, force_quotes));
- fputs(query->data, fout);
-
}
static void
-dumpTriggers(FILE *fout, const char *tablename,
+dumpTriggers(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
int i,
continue;
for (j = 0; j < tblinfo[i].ntrig; j++)
{
- becomeUser(fout, tblinfo[i].usename);
- fputs(tblinfo[i].triggers[j], fout);
- dumpComment(fout, tblinfo[i].trcomments[j], tblinfo[i].troids[j]);
+ ArchiveEntry(fout, tblinfo[i].triggers[j].oid, tblinfo[i].triggers[j].tgname,
+ "TRIGGER", NULL, tblinfo[i].triggers[j].tgsrc, "",
+ tblinfo[i].usename, NULL, NULL);
+ dumpComment(fout, tblinfo[i].triggers[j].tgcomment, tblinfo[i].triggers[j].oid);
}
}
}
static void
-dumpRules(FILE *fout, const char *tablename,
+dumpRules(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
PGresult *res;
for (i = 0; i < nrules; i++)
{
- fprintf(fout, "%s\n", PQgetvalue(res, i, i_definition));
+ ArchiveEntry(fout, PQgetvalue(res, i, i_oid), PQgetvalue(res, i, i_rulename),
+ "RULE", NULL, PQgetvalue(res, i, i_definition),
+ "", "", NULL, NULL);
/* Dump rule comments */
}
}
-
-/* Issue a psql \connect command to become the specified user.
- * We want to do this only if we are dumping ACLs,
- * and only if the new username is different from the last one
- * (to avoid the overhead of useless backend launches).
- */
-
-static void
-becomeUser(FILE *fout, const char *username)
-{
- static const char *lastusername = "";
-
- if (aclsSkip)
- return;
-
- if (strcmp(lastusername, username) == 0)
- return;
-
- fprintf(fout, "\\connect - %s\n", username);
-
- lastusername = username;
-}
* Portions Copyright (c) 1996-2000, PostgreSQL, Inc
* Portions Copyright (c) 1994, Regents of the University of California
*
- * $Id: pg_dump.h,v 1.48 2000/04/12 17:16:15 momjian Exp $
+ * $Id: pg_dump.h,v 1.49 2000/07/04 14:25:28 momjian Exp $
*
* Modifications - 6/12/96 - dave@bensoft.com - version 1.13.dhb.2
*
#include "pqexpbuffer.h"
#include "catalog/pg_index.h"
+#include "pg_backup.h"
/* The data structures used to store system catalog information */
int dumped; /* 1 if already dumped */
} FuncInfo;
+typedef struct _trigInfo
+{
+ char *oid;
+ char *tgname;
+ char *tgsrc;
+ char *tgdel;
+ char *tgcomment;
+} TrigInfo;
+
typedef struct _tableInfo
{
char *oid;
int ncheck; /* # of CHECK expressions */
char **check_expr; /* [CONSTRAINT name] CHECK expressions */
int ntrig; /* # of triggers */
- char **triggers; /* CREATE TRIGGER ... */
- char **trcomments; /* COMMENT ON TRIGGER ... */
- char **troids; /* TRIGGER oids */
+ TrigInfo *triggers; /* Triggers on the table */
char *primary_key; /* PRIMARY KEY of the table, if any */
} TableInfo;
extern bool g_force_quotes; /* double-quotes for identifiers flag */
extern bool g_verbose; /* verbose flag */
extern int g_last_builtin_oid; /* value of the last builtin oid */
-extern FILE *g_fout; /* the script file */
+extern Archive *g_fout; /* the script file */
/* placeholders for comment starting and ending delimiters */
extern char g_comment_start[10];
* common utility functions
*/
-extern TableInfo *dumpSchema(FILE *fout,
+extern TableInfo *dumpSchema(Archive *fout,
int *numTablesPtr,
const char *tablename,
- const bool acls);
-extern void dumpSchemaIdx(FILE *fout,
+ const bool acls,
+ const bool oids,
+ const bool schemaOnly,
+ const bool dataOnly);
+extern void dumpSchemaIdx(Archive *fout,
const char *tablename,
TableInfo *tblinfo,
int numTables);
extern InhInfo *getInherits(int *numInherits);
extern void getTableAttrs(TableInfo *tbinfo, int numTables);
extern IndInfo *getIndices(int *numIndices);
-extern void dumpDBComment(FILE *outfile);
-extern void dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
+extern void dumpDBComment(Archive *outfile);
+extern void dumpTypes(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
-extern void dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
+extern void dumpProcLangs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
-extern void dumpFuncs(FILE *fout, FuncInfo *finfo, int numFuncs,
+extern void dumpFuncs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
-extern void dumpAggs(FILE *fout, AggInfo *agginfo, int numAggregates,
+extern void dumpAggs(Archive *fout, AggInfo *agginfo, int numAggregates,
TypeInfo *tinfo, int numTypes);
-extern void dumpOprs(FILE *fout, OprInfo *agginfo, int numOperators,
+extern void dumpOprs(Archive *fout, OprInfo *agginfo, int numOperators,
TypeInfo *tinfo, int numTypes);
-extern void dumpTables(FILE *fout, TableInfo *tbinfo, int numTables,
+extern void dumpTables(Archive *fout, TableInfo *tbinfo, int numTables,
InhInfo *inhinfo, int numInherits,
TypeInfo *tinfo, int numTypes, const char *tablename,
- const bool acls);
-extern void dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
+ const bool acls, const bool oids,
+ const bool schemaOnly, const bool dataOnly);
+extern void dumpIndices(Archive *fout, IndInfo *indinfo, int numIndices,
TableInfo *tbinfo, int numTables, const char *tablename);
extern const char *fmtId(const char *identifier, bool force_quotes);
# and "pg_group" tables, which belong to the whole installation rather
# than any one individual database.
#
-# $Header: /cvsroot/pgsql/src/bin/pg_dump/Attic/pg_dumpall.sh,v 1.1 2000/07/03 16:35:39 petere Exp $
+# $Header: /cvsroot/pgsql/src/bin/pg_dump/Attic/pg_dumpall.sh,v 1.2 2000/07/04 14:25:28 momjian Exp $
CMDNAME=`basename $0`
PSQL="${PGPATH}/psql $connectopts"
-PGDUMP="${PGPATH}/pg_dump $connectopts $pgdumpextraopts"
+PGDUMP="${PGPATH}/pg_dump $connectopts $pgdumpextraopts -Fp"
echo "--"
--- /dev/null
+/*-------------------------------------------------------------------------
+ *
+ * pg_restore.c
+ * pg_restore is a utility for extracting postgres database definitions
+ * from a backup archive created by pg_dump using the archiver
+ * interface.
+ *
+ * pg_restore will read the backup archive and
+ * dump out a script that reproduces
+ * the schema of the database in terms of
+ * user-defined types
+ * user-defined functions
+ * tables
+ * indices
+ * aggregates
+ * operators
+ * ACL - grant/revoke
+ *
+ * the output script is SQL that is understood by PostgreSQL
+ *
+ * Basic process in a restore operation is:
+ *
+ * Open the Archive and read the TOC.
+ * Set flags in TOC entries, and *maybe* reorder them.
+ * Generate script to stdout
+ * Exit
+ *
+ * Copyright (c) 2000, Philip Warner
+ * Rights are granted to use this software in any way so long
+ * as this notice is not removed.
+ *
+ * The author is not responsible for loss or damages that may
+ * result from its use.
+ *
+ *
+ * IDENTIFICATION
+ *
+ * Modifications - 28-Jun-2000 - pjw@rhyme.com.au
+ *
+ * Initial version. Command processing taken from original pg_dump.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+
+/*
+#include "postgres.h"
+#include "access/htup.h"
+#include "catalog/pg_type.h"
+#include "catalog/pg_language.h"
+#include "catalog/pg_index.h"
+#include "catalog/pg_trigger.h"
+#include "libpq-fe.h"
+*/
+
+#include "pg_backup.h"
+
+#ifndef HAVE_STRDUP
+#include "strdup.h"
+#endif
+
+#ifdef HAVE_TERMIOS_H
+#include <termios.h>
+#endif
+
+#ifdef HAVE_GETOPT_H
+#include <getopt.h>
+#else
+#include <unistd.h>
+#endif
+
+/* Forward decls */
+static void usage(const char *progname);
+static char* _cleanupName(char* name);
+
+typedef struct option optType;
+
+#ifdef HAVE_GETOPT_H
+struct option cmdopts[] = {
+ { "clean", 0, NULL, 'c' },
+ { "data-only", 0, NULL, 'a' },
+ { "file", 1, NULL, 'f' },
+ { "format", 1, NULL, 'F' },
+ { "function", 2, NULL, 'p' },
+ { "index", 2, NULL, 'i'},
+ { "list", 0, NULL, 'l'},
+ { "no-acl", 0, NULL, 'x' },
+ { "oid-order", 0, NULL, 'o'},
+ { "orig-order", 0, NULL, 'O' },
+ { "rearrange", 0, NULL, 'r'},
+ { "schema-only", 0, NULL, 's' },
+ { "table", 2, NULL, 't'},
+ { "trigger", 2, NULL, 'T' },
+ { "use-list", 1, NULL, 'u'},
+ { "verbose", 0, NULL, 'v' },
+ { NULL, 0, NULL, 0}
+ };
+#endif
+int main(int argc, char **argv)
+{
+ RestoreOptions *opts;
+ char *progname;
+ int c;
+ Archive* AH;
+ char *fileSpec;
+
+ opts = NewRestoreOptions();
+
+ progname = *argv;
+
+#ifdef HAVE_GETOPT_LONG
+ while ((c = getopt_long(argc, argv, "acf:F:i:loOp:st:T:u:vx", cmdopts, NULL)) != EOF)
+#else
+ while ((c = getopt(argc, argv, "acf:F:i:loOp:st:T:u:vx")) != -1)
+#endif
+ {
+ switch (c)
+ {
+ case 'a': /* Dump data only */
+ opts->dataOnly = 1;
+ break;
+ case 'c': /* clean (i.e., drop) schema prior to
+ * create */
+ opts->dropSchema = 1;
+ break;
+ case 'f': /* output file name */
+ opts->filename = strdup(optarg);
+ break;
+ case 'F':
+ if (strlen(optarg) != 0)
+ opts->formatName = strdup(optarg);
+ break;
+ case 'o':
+ opts->oidOrder = 1;
+ break;
+ case 'O':
+ opts->origOrder = 1;
+ break;
+ case 'r':
+ opts->rearrange = 1;
+ break;
+
+ case 'p': /* Function */
+ opts->selTypes = 1;
+ opts->selFunction = 1;
+ opts->functionNames = _cleanupName(optarg);
+ break;
+ case 'i': /* Index */
+ opts->selTypes = 1;
+ opts->selIndex = 1;
+ opts->indexNames = _cleanupName(optarg);
+ break;
+ case 'T': /* Trigger */
+ opts->selTypes = 1;
+ opts->selTrigger = 1;
+ opts->triggerNames = _cleanupName(optarg);
+ break;
+ case 's': /* dump schema only */
+ opts->schemaOnly = 1;
+ break;
+ case 't': /* Dump data for this table only */
+ opts->selTypes = 1;
+ opts->selTable = 1;
+ opts->tableNames = _cleanupName(optarg);
+ break;
+ case 'l': /* Dump the TOC summary */
+ opts->tocSummary = 1;
+ break;
+
+ case 'u': /* input TOC summary file name */
+ opts->tocFile = strdup(optarg);
+ break;
+
+ case 'v': /* verbose */
+ opts->verbose = 1;
+ break;
+ case 'x': /* skip ACL dump */
+ opts->aclsSkip = 1;
+ break;
+ default:
+ usage(progname);
+ break;
+ }
+ }
+
+ if (optind < argc) {
+ fileSpec = argv[optind];
+ } else {
+ fileSpec = NULL;
+ }
+
+ if (opts->formatName) {
+
+ switch (opts->formatName[0]) {
+
+ case 'c':
+ case 'C':
+ opts->format = archCustom;
+ break;
+
+ case 'f':
+ case 'F':
+ opts->format = archFiles;
+ break;
+
+ default:
+ fprintf(stderr, "%s: Unknown archive format '%s', please specify 'f' or 'c'\n", progname, opts->formatName);
+ exit (1);
+ }
+ }
+
+ AH = OpenArchive(fileSpec, opts->format);
+
+ if (opts->tocFile)
+ SortTocFromFile(AH, opts);
+
+ if (opts->oidOrder)
+ SortTocByOID(AH);
+ else if (opts->origOrder)
+ SortTocByID(AH);
+
+ if (opts->rearrange) {
+ MoveToEnd(AH, "TABLE DATA");
+ MoveToEnd(AH, "INDEX");
+ MoveToEnd(AH, "TRIGGER");
+ MoveToEnd(AH, "RULE");
+ MoveToEnd(AH, "ACL");
+ }
+
+ if (opts->tocSummary) {
+ PrintTOCSummary(AH, opts);
+ } else {
+ RestoreArchive(AH, opts);
+ }
+
+ CloseArchive(AH);
+
+ return 0;
+}
+
+static void usage(const char *progname)
+{
+#ifdef HAVE_GETOPT_LONG
+ fprintf(stderr,
+ "usage: %s [options] [backup file]\n"
+ " -a, --data-only \t dump out only the data, no schema\n"
+ " -c, --clean \t clean (drop) schema prior to create\n"
+ " -f filename \t script output filename\n"
+ " -F, --format {c|f} \t specify backup file format\n"
+ " -p, --function[=name] \t dump functions or named function\n"
+ " -i, --index[=name] \t dump indexes or named index\n"
+ " -l, --list \t dump summarized TOC for this file\n"
+ " -o, --oid-order \t dump in oid order\n"
+ " -O, --orig-order \t dump in original dump order\n"
+ " -r, --rearrange \t rearrange output to put indexes etc at end\n"
+ " -s, --schema-only \t dump out only the schema, no data\n"
+ " -t [table], --table[=table] \t dump for this table only\n"
+ " -T, --trigger[=name] \t dump triggers or named trigger\n"
+ " -u, --use-list filename \t use specified TOC for ordering output from this file\n"
+ " -v \t verbose\n"
+ " -x, --no-acl \t skip dumping of ACLs (grant/revoke)\n"
+ , progname);
+#else
+ fprintf(stderr,
+ "usage: %s [options] [backup file]\n"
+ " -a \t dump out only the data, no schema\n"
+ " -c \t clean (drop) schema prior to create\n"
+ " -f filename NOT IMPLEMENTED \t script output filename\n"
+ " -F {c|f} \t specify backup file format\n"
+ " -p name \t dump functions or named function\n"
+ " -i name \t dump indexes or named index\n"
+ " -l \t dump summarized TOC for this file\n"
+ " -o \t dump in oid order\n"
+ " -O \t dump in original dump order\n"
+ " -r \t rearrange output to put indexes etc at end\n"
+ " -s \t dump out only the schema, no data\n"
+ " -t name \t dump for this table only\n"
+ " -T name \t dump triggers or named trigger\n"
+ " -u filename \t use specified TOC for ordering output from this file\n"
+ " -v \t verbose\n"
+ " -x \t skip dumping of ACLs (grant/revoke)\n"
+ , progname);
+#endif
+ fprintf(stderr,
+ "\nIf [backup file] is not supplied, then standard input "
+ "is used.\n");
+ fprintf(stderr, "\n");
+
+ exit(1);
+}
+
+static char* _cleanupName(char* name)
+{
+ int i;
+
+ if (!name)
+ return NULL;
+
+ if (strlen(name) == 0)
+ return NULL;
+
+ name = strdup(name);
+
+ if (name[0] == '"')
+ {
+ memmove(name, name + 1, strlen(name)); /* safe for overlapping copy */
+ if (*(name + strlen(name) - 1) == '"')
+ *(name + strlen(name) - 1) = '\0';
+ }
+ /* otherwise, convert table name to lowercase... */
+ else
+ {
+ for (i = 0; name[i]; i++)
+ if (isascii((unsigned char) name[i]) && isupper((unsigned char) name[i]))
+ name[i] = tolower((unsigned char) name[i]);
+ }
+ return name;
+}
+