From e0a5d6ce4652bf9c31e43ff5306f8362c6e2eece Mon Sep 17 00:00:00 2001
From: Bruce Momjian
Date: Tue, 4 Dec 2001 06:20:53 +0000
Subject: Update FAQ_DEV.
---
 doc/src/FAQ/FAQ_DEV.html | 415 ++++++++++++++++++++++++-----------------------
 1 file changed, 214 insertions(+), 201 deletions(-)

diff --git a/doc/src/FAQ/FAQ_DEV.html b/doc/src/FAQ/FAQ_DEV.html
index 1eb05a16e49..dd99a25615e 100644
--- a/doc/src/FAQ/FAQ_DEV.html
+++ b/doc/src/FAQ/FAQ_DEV.html
Last updated: Tue Dec 4 01:20:03 EST 2001

Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
This was written by Lamar Owen:
2001-05-03
As to how the RPMs are built -- to answer that question sanely requires me to know how much experience you have with the whole RPM paradigm. 'How is the RPM built?' is a multifaceted question. The obvious simple answer is that I maintain:

1.) A set of patches to make certain portions of the source tree 'behave' in the different environment of the RPMset;

2.) The initscript;

3.) Any other ancillary scripts and files;

4.) A README.rpm-dist document that tries to adequately document both the differences between the RPM build and the WHY of the differences, as well as useful RPM environment operations (like using syslog, upgrading, getting postmaster to start at OS boot, etc);

5.) The spec file that throws it all together. This is not a trivial undertaking in a package of this size.
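To give a concrete picture of what the spec file is for: driving a build from it with the RPM tools of that era looked roughly like the sketch below. The file name is only illustrative, not the actual name used for the official packages, and this assumes the spec file plus the sources and patches it lists are already in place in the RPM build directories.

    rpm -ba postgresql.spec     # build the binary and source RPMs described by the spec
                                # (later rpm releases moved this to 'rpmbuild -ba')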
I then download and build on as many different canonical distributions as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive opportunities from certain commercial enterprises such as Great Bridge and PostgreSQL, Inc. to build on other distributions.

I test the build by installing the resulting packages and running the regression tests. Once the build passes these tests, I upload to the postgresql.org ftp server and make a release announcement. I am also responsible for maintaining the RPM download area on the ftp site.
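In concrete terms that test cycle looks something like the following sketch; the package file names and initscript path are only illustrative, and the regression run assumes a source tree of the matching version is at hand.

    rpm -Uvh postgresql*.rpm                # install the freshly built packages
    /etc/rc.d/init.d/postgresql start       # start the server through the packaged initscript
    cd pgsql/src/test/regress
    gmake installcheck                      # run the regression suite against the installed server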
You'll notice I said 'canonical' distributions above. That simply means that the machine is as stock 'out of the box' as practical -- that is, everything (except a select few programs) on these boxen is installed by RPM; only official Red Hat released RPMs are used (except in unusual circumstances involving software that will not alter the build -- for example, installing a newer non-RedHat version of the Dia diagramming package is OK -- installing Python 2.1 on the box that has Python 1.5.2 installed is not, as that alters the PostgreSQL build). The RPM as uploaded is built as close to out-of-the-box pristine as is possible. Only the standard released 'official to that release' compiler is used -- and only the standard official kernel is used as well.

For a time I built on Mandrake for RedHat consumption -- no more. Nonstandard RPM building systems are worse than useless. Which is not to say that Mandrake is useless! By no means is Mandrake useless -- unless you are building Red Hat RPMs -- and Red Hat is useless if you're trying to build Mandrake or SuSE RPMs, for that matter. But I would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to build for public consumption! :-)

I _do_ attempt to make the _source_ RPM compatible with as many distributions as possible -- however, since I have limited resources (as a volunteer RPM maintainer) I am limited as to the amount of testing said build will get on other distributions, architectures, or systems.
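For anyone on a distribution I have not tested, the usual route is to rebuild that source RPM locally; a minimal sketch, with a package file name that is only an example:

    rpm --rebuild postgresql-7.1.3-1.src.rpm    # produce binary RPMs tailored to the local distribution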
And, while I understand people's desire to immediately upgrade to the newest version, realize that I do this as a side interest -- I have a regular, full-time job as a broadcast engineer/webmaster/sysadmin/Technical Director which occasionally prevents me from making timely RPM releases. This happened during the early part of the 7.1 beta cycle -- but I believe I was pretty much on the ball for the Release Candidates and the final release.

I am working towards a more open RPM distribution -- I would dearly love to more fully document the process and put everything into CVS -- once I figure out how I want to represent things such as the spec file in a CVS form. It makes no sense to maintain a changelog, for instance, in the spec file in CVS when CVS does a better job of changelogs -- I will need to write a tool to generate a real spec file from a CVS spec-source file that would add version numbers, changelog entries, etc. to the result before building the RPM. IOW, I need to rethink the process -- and then go through the motions of putting my long RPM history into CVS one version at a time so that version history information isn't lost.

As to why all these files aren't part of the source tree, well, unless there was a large cry for it to happen, I don't believe it should. PostgreSQL is very platform-agnostic -- and I like that. Including the RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that agnostic stance in a negative way. But maybe I'm too sensitive to that. I'm not opposed to doing that if that is the consensus of the core group -- and that would be a sneaky way to get the stuff into CVS :-). But if the core group isn't thrilled with the idea (and my instinct says they're not likely to be), I am opposed to the idea -- not to keep the stuff to myself, but to not hinder the platform-neutral stance. IMHO, of course.

Of course, there are many projects that DO include all the files necessary to build RPMs from their Official Tarball (TM).
This was written by Tom Lane:
2001-05-07
If you just do basic "cvs checkout", "cvs update", "cvs commit", then you'll always be dealing with the HEAD version of the files in CVS. That's what you want for development, but if you need to patch past stable releases then you have to be able to access and update the "branch" portions of our CVS repository. We normally fork off a branch for a stable release just before starting the development cycle for the next release.
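For instance, a routine development cycle against the head branch is just the following; the "..." stands for whatever global CVS options (such as the repository location) you normally use, and the commit message is only a placeholder.

    cvs ... checkout pgsql            # first time only
    cd pgsql
    cvs ... update -d -P              # pull in the latest HEAD changes
                                      # edit, build, and test, then:
    cvs ... commit -m 'describe the change here'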
The first thing you have to know is the branch name for the branch you are interested in getting at. To do this, look at some long-lived file, say the top-level HISTORY file, with "cvs status -v" to see what the branch names are. (Thanks to Ian Lance Taylor for pointing out that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES
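For example, from inside a checked-out tree you might run the command below; the branch names show up near the end of the output under "Existing Tags:", marked as branches (output trimmed here, with the revision numbers omitted).

    cvs status -v HISTORY

    Existing Tags:
        REL7_1_STABLE      (branch: ...)
        REL7_0_PATCHES     (branch: ...)
        REL6_5_PATCHES     (branch: ...)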
OK, so how do you do work on a branch? By far the best way is to create a separate checkout tree for the branch and do your work in that. Not only is that the easiest way to deal with CVS, but you really need to have the whole past tree available anyway to test your work. (And you *better* test your work. Never forget that dot-releases tend to go out with very little beta testing --- so whenever you commit an update to a stable branch, you'd better be doubly sure that it's correct.)

Normally, to checkout the head branch, you just cd to the place you want to contain the toplevel "pgsql" directory and say

    cvs ... checkout pgsql
To get a past branch, you cd to wherever you want it and say

    cvs ... checkout -r BRANCHNAME pgsql
For example, just a couple days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql
and now I have a maintenance copy of 7.1.*.
When you've done a checkout in this way, the branch name is "sticky": CVS automatically knows that this directory tree is for the branch, and whenever you do "cvs update" or "cvs commit" in this tree, you'll fetch or store the latest version in the branch, not the head version. Easy as can be.
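If you ever need to double-check which branch a given tree is tracking, plain "cvs status" on any file reports the sticky tag; a trimmed, illustrative example:

    cvs status HISTORY | grep 'Sticky Tag'
       Sticky Tag:          REL7_1_STABLE (branch: ...)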
So, if you have a patch that needs to apply to both the head and a recent stable branch, you have to make the edits and do the commit twice, once in your development tree and once in your stable branch tree. This is kind of a pain, which is why we don't normally fork the tree right away after a major release --- we wait for a dot-release or two, so that we won't have to double-patch the first wave of fixes.
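In practice that double commit looks roughly like this; the directory layout and patch file name are only illustrative.

    cd ~/pgsql                          # development (HEAD) checkout
    patch -p0 < some-fix.patch
    cvs ... commit -m 'Fix such-and-such'

    cd ~postgres/REL7_1/pgsql           # stable-branch checkout
    patch -p0 < some-fix.patch
    cvs ... commit -m 'Fix such-and-such (backpatch to REL7_1_STABLE)'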
This was written by Lamar Owen:
2001-06-22
> If someone was interested in joining the development team, where would
> they...
> - Find a description of the open source development process used by the
> PostgreSQL team.

Read HACKERS for six months (or a full release cycle, whichever is longer). Really. HACKERS _is_ the process. The process is not well documented (AFAIK -- it may be somewhere that I am not aware of) -- and it changes continually.
> - Find the development environment (OS, system, compilers, etc)
> required to develop code.

Developers Corner on the website has links to this information. The distribution tarball itself includes all the extra tools and documents that go beyond a good Unix-like development environment. In general, a modern unix with a modern gcc, GNU make or equivalent, autoconf (of a particular version), and good working knowledge of those tools are required.
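As a rough sketch of what that toolchain gets used for, building and testing from a fresh CVS checkout of that era looks approximately like the following; the install prefix is only an example.

    cd pgsql
    ./configure --prefix=$HOME/pgsql-test
    gmake                    # GNU make; plain 'make' works where make is GNU make
    gmake check              # regression tests against a temporary installation
    gmake install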
> - Find an area or two that needs some support.
The TODO list.
You've made the first step, by finding and subscribing to HACKERS. Once you find an area to look at in the TODO, and have read the documentation on the internals, etc., you check out a current CVS tree, write what you are going to write (keeping your CVS checkout up to date in the process), make up a patch (as a context diff only), and send it to the PATCHES list, preferably.
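The usual way to produce such a context diff from an up-to-date checkout is simply the following; the output file name is of course just an example.

    cd pgsql
    cvs ... diff -c > my-change.patch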
Discussion on the patch typically happens here. If the patch adds a major feature, it would be a good idea to talk about it first on the HACKERS list, in order to increase the chances of it being accepted, as well as to avoid duplication of effort. Note that experienced developers with a proven track record usually get the big jobs -- for more than one reason. Also note that PostgreSQL is highly portable -- nonportable code will likely be dismissed out of hand.

Once your contributions get accepted, things move from there. Typically, you would be added as a developer on the list on the website when one of the other developers recommends it. Membership on the steering committee is by invitation only, by the other steering committee members, from what I have gathered watching from a distance.

I make these statements from having watched the process for over two years.

To see a good example of how one goes about this, search the archives for the name 'Tom Lane' and see what his first post consisted of, and where he took things. In particular, note that this hasn't been _that_ long ago -- and his bugfixing and general deep knowledge of this codebase is legendary. Take a few days to read after him. And pay special attention to both the sheer quantity as well as the painstaking quality of his work. Both are in high demand.