
Learning the GNU development tools

Preface  
Acknowledgements  
Copying  
1. Introduction to the GNU build system  
2. Writing Good Programs  
3. Using GNU Emacs  
4. Compiling with Makefiles  
5. Using Automake and Autoconf  
6. Using Autotools  
7. C++ and Autoconf  
8. Fortran with Autoconf  
9. Maintaining Documentation  
A. Legal issues with Free Software  
B. Philosophical issues  

--- The Detailed Node Listing ---

Introduction to the GNU build system

1.1 Installing a GNU package  
1.2 Installing the GNU build system  
1.3 Hello world example  
1.4 Understanding the hello world example  
1.5 Using configuration headers  
1.6 Maintaining the documentation files  
1.7 Organizing your project in subdirectories  
1.8 Hello world with an attitude  
1.9 Tracking version numbers  

Writing Good Programs

2.1 Why good code is important  
2.2 Choosing a good programming language  
2.3 Developing libraries  
2.4 Developing applications  
2.5 Free software is good software  
2.6 Invoking the `gpl' utility  
2.7 Inserting notices with Emacs  

Using GNU Emacs

3.1 Introduction to Emacs  
3.2 Installing GNU Emacs  
3.3 Configuring GNU Emacs  
3.4 Using vi emulation  
3.5 Using Emacs as an IDE  
3.6 Inserting copyright notices with Emacs  
3.7 Using Emacs as an email client  
3.8 Handling patches  
3.9 Further reading on Emacs  

Compiling with Makefiles

4.1 Direct compilation  
4.2 Enter Makefiles  
4.3 Problems with Makefiles and workarounds  
4.4 Building libraries  

Using Automake and Autoconf

5.1 Hello World revisited  
5.2 OLD Using configuration headers  
5.3 The building process  
5.4 Some general advice  
5.5 Standard organization with Automake  
5.6 Programs and Libraries with Automake  
5.7 General Automake principles  
5.8 Simple Automake examples  
5.9 Built sources  
5.10 Installation directories  
5.11 Handling shell scripts  
5.12 Handling other obscurities  

Using Autotools

6.1 Introduction  
6.2 Compiler configuration with the LF macros  
6.3 The features of `LF_CPP_PORTABILITY'  
6.4 Writing portable C++  
6.5 Hello world revisited again  
6.6 Invoking `acmkdir'  
6.7 Handling Embedded text  
6.8 Handling very deep packages  

Fortran with Autoconf

8.1 Introduction to Fortran support  
8.2 Fortran compilers and linkage  
8.3 Walkthrough a simple example  
8.4 The gory details  
8.5 Portability problems with Fortran  

Maintaining Documentation

9.1 Writing proper manuals  
9.2 Introduction to Texinfo  
9.3 Markup in Texinfo  
9.4 GNU Emacs support for Texinfo  
9.5 Writing documentation with LaTeX  
9.6 Creating a LaTeX package  
9.7 Further reading about LaTeX  

Legal issues with Free Software

A.1 Understanding Copyright  
A.2 Other legal concerns  
A.3 Freeing your software  

Philosophical issues

B.1 Why software should not have owners  
B.2 Why free software needs free documentation  
B.3 Copyleft; Pragmatic Idealism  
B.4 The X Windows Trap  
B.5 Categories of software  
B.6 Confusing words  



Preface

The purpose of this document is to introduce you to the GNU build system and to show you how to use it to develop high-quality software on GNU, in a way that conforms to the GNU coding standards. These techniques are also useful for software development on GNU/Linux and most variants of the Unix system; in fact, one of the reasons for the elaborate GNU build system is to make software portable between GNU and other Unix-like operating systems. We also discuss peripheral topics, such as how to use GNU Emacs as an IDE (integrated development environment), and the practical, legal and philosophical concerns behind free software development. The intended reader is a software developer who already knows a programming language and wants to learn how to package programs in a way that follows the GNU coding standards.

When we speak of the GNU build system we refer primarily to the following packages: Autoconf, Automake and Libtool, together with a few supporting accessories.

The GNU build system has two goals. The first is to simplify the development of portable programs. The second is to simplify the building of programs that are distributed as source code. The first goal is achieved by the automatic generation of a `configure' shell script. The second goal is achieved by the automatic generation of Makefiles and other shell scripts that are typically used in the building process. This way the developer can concentrate on debugging the source code, instead of overly complex Makefiles, and the installer can compile and install the program directly from the source code distribution by a simple and automatic procedure.

The GNU build system needs to be installed only when you are developing programs that are meant to be distributed. To build a program from distributed source code, you only need a working make, a compiler, a shell, and sometimes standard Unix utilities like sed, awk, yacc, lex. The objective is to make software installation as simple and as automatic as possible for the installer. Also, by setting up the GNU build system such that it creates programs that don't require the build system to be present during their installation, it becomes possible to use the build system to bootstrap itself.

Some tasks that are simplified by the GNU build system include: configuring the package for the platform at hand, building it, running its test suite, installing and uninstalling it under any prefix, and cutting and verifying source code distributions.

The Autotools package complements the GNU build system by providing the following additional features: a set of additional Autoconf macros, such as the LF macros for compiler configuration; the `acmkdir' utility for setting up new projects; and the `gpl' utility, together with Emacs packages, for inserting copyright notices.

Autotools is still under development and there may still be bugs. At the moment Autotools doesn't do shared libraries, but that will change in the future.

This effort began with my attempt to write a tutorial for Autoconf. It evolved into "Learning Autoconf and Automake". Along the way I developed Autotools to deal with things that annoyed me, or to cover needs that arose in my own work. Ultimately I want this document to be both a unified introduction to the GNU build system and the documentation for the Autotools package.

I believe that this know-how is very important, and should not be withheld from engineering and science students who will one day do software development for academic or industrial research. Many students are severely undertrained in software engineering and write a lot of bad code. This is very sad, because they, of all people, have the greatest need to write portable, robust and reliable code. I found from my own experience that moving away from Fortran and C, and towards C++, is the first step towards writing better code. The second step is to use the sophisticated GNU build system, and to use it properly, as described in this document. I hope that this document will help people get over the learning curve of that second step, so that they can be productive and ready to study the reference manuals that are distributed with all these tools.

This manual, of course, is still under construction. When it is done, a paragraph somewhere around here will give the traditional run-down of summaries of each chapter. I write this manual in a highly non-linear way, so while it is under construction you will find that some parts are better developed than others. If you wish to contribute sections that I haven't written, or haven't yet developed fully, please contact me.

Chapters 1, 2, 3 and 4 are okay. Chapter 5 is okay too, but needs a little more work. I removed the other chapters to minimize confusion; their sources are still distributed as part of the Autotools package, for those who found them useful. Those chapters need a lot of rewriting, and at this point they would do the unsuspecting reader more harm than good. Please contact me if you have any suggestions for improving this manual.

Remarks by Marcelo: I am currently updating this manual to the latest release of the Autoconf and Automake tools.



Acknowledgements

This document and the Autotools package were originally written by Eleftherios Gkioulekas. Many people have further contributed to this effort, directly or indirectly, in various ways. Here is a list of these people. Please help me keep it complete and free of errors.

FIXME: I need to start keeping track of acknowledgements here



Copying

The following notice refers to the Autotools package, with which this document is distributed. The package includes this documentation, as well as the source code for utilities like `acmkdir' and for additional Autoconf macros. The complete GNU build system involves other packages also, such as Autoconf, Automake, Libtool and a few other accessories. These packages are also free software, and you can obtain them from the Free Software Foundation. For details on doing so, please visit their web site http://www.fsf.org/. Although Autotools has been designed to work with the GNU build system, it is not yet an official part of the GNU project.

The Autotools package is "free"; this means that everyone is free to use it and free to redistribute it on a free basis. The Autotools package is not in the public domain; it is copyrighted and there are restrictions on its distribution, but these restrictions are designed to permit everything that a good cooperating citizen would want to do. What is not allowed is to try to prevent others from further sharing any version of this package that they might get from you.

Specifically, we want to make sure that you have the right to give away copies of the programs that relate to Autotools, that you receive source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.

To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights. For example, if you distribute copies of the Autotools-related code, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights.

Also, for our own protection, we must make certain that everyone finds out that there is no warranty for the programs that relate to Autotools. If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.

The precise conditions of the licenses for the programs currently being distributed that relate to Autotools are found in the General Public Licenses that accompany it.



1. Introduction to the GNU build system

1.1 Installing a GNU package  
1.2 Installing the GNU build system  
1.3 Hello world example  
1.4 Understanding the hello world example  
1.5 Using configuration headers  
1.6 Maintaining the documentation files  
1.7 Organizing your project in subdirectories  
1.8 Hello world with an attitude  
1.9 Tracking version numbers  



1.1 Installing a GNU package

When you download an autoconfiguring package, it usually has a filename like `foo-1.0.tar.gz', where the number is a version number. To install it, first unpack the package to a directory someplace:
 
% gunzip foo-1.0.tar.gz
% tar xf foo-1.0.tar
Then you enter the directory and look for files like `README' or `INSTALL' that explain what you need to do. Almost always this amounts to typing the following commands:
 
% cd foo-1.0
% ./configure 
% make
% make check
% su
# make install
The `configure' command invokes a shell script, distributed with the package, that configures the package for you automatically. First it probes your system through a set of tests that allow it to determine things it needs to know, and then it uses this knowledge to automatically generate a `Makefile' from a template stored in a file called `Makefile.in'. When you invoke `make' with no argument, it executes the default target of the generated `Makefile'. That target compiles your source code, but does not install it. If your software comes with self-tests, then you can compile and run them by typing `make check'. To install the software, you need to explicitly invoke `make' again with the `install' target. In order for `make' to work, the directory containing the generated `Makefile' must be your current directory.
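While it runs, `configure' prints a line for each test that it performs. The output looks something like the following sketch; the exact set of tests varies from package to package:
 
% ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gcc... gcc
checking whether the C compiler works... yes
...
config.status: creating Makefile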

During installation, the following files go to the following places:
 
Executables   -> /usr/local/bin
Libraries     -> /usr/local/lib
Header files  -> /usr/local/include
Man pages     -> /usr/local/man/man?
Info files    -> /usr/local/info
The `/usr/local' directory is called the prefix. The default prefix is always `/usr/local', but you can set it to anything you like when you call `configure' by adding a `--prefix' option. For example, suppose that you are not a privileged user, so you cannot install anything in `/usr/local', but you would still like to install the package for your own use. Then you can tell the `configure' script to install the package in your home directory `/home/username':
 
% ./configure --prefix=/home/username
% make
% make check
% make install
The `--prefix' argument tells `configure' where you want to install your package, and `configure' will take that into account and build the proper makefile automatically.
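For example, for a hypothetical package `foo' that installs one executable, one library and one header file, the layout resulting from the above `--prefix' would look something like this:
 
/home/username/bin/foo
/home/username/lib/libfoo.a
/home/username/include/foo.h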

The `configure' script is generated by `autoconf' from the contents of a file called `configure.ac', and the `Makefile.in' file is generated by `automake' from a very high-level specification stored in a file called `Makefile.am'. These files are very easy to maintain, and in this tutorial we will teach you how they work. The developer then only needs to maintain `configure.ac' and `Makefile.am'. As it turns out, these are so much easier to work with than raw Makefiles, and so much more powerful, that once you get the hang of them you will never want to go back.

In some packages, the `configure' script supports many more options than just `--prefix'. To find out about these options, you should consult the files `INSTALL' and `README' that are traditionally distributed with the package, and also look at `configure''s self-documenting facility:
 
% configure --help
Configure scripts can also report the version of Autoconf that generated them:
 
% configure --version
The makefiles generated by `automake' support a few more targets for undoing the installation process to various levels. More specifically:

`uninstall'
Remove the installed files from the system, undoing the effect of `make install'.
`clean'
Remove from the build directory the files that were created by `make'.
`distclean'
Remove, in addition, the files that were generated by `configure', returning the source tree to its freshly unpacked state.

Also, in the spirit of free redistributable code, there are targets for cutting a source code distribution. If you type
 
% make dist
it will rebuild the `foo-1.0.tar.gz' file that you started with. If you modified the source, the modifications will be included in the distribution (and you should probably change the version number). Before putting a distribution up on FTP, you can test its integrity with:
 
% make distcheck
This makes the distribution, then unpacks it in a temporary subdirectory and tries to configure it, build it, run the test suite, and check whether the installation script works. If everything is okay, then you're told that your distribution is ready.

Once you go through this tutorial, you'll have the know-how you need to develop autoconfiguring programs with such powerful Makefiles.



1.2 Installing the GNU build system

It is not unusual to be stuck on a system that does not have the GNU build tools installed. If you do have them installed, check to see whether you have the most recent versions. To do that type:
 
% autoconf --version
% automake --version
% libtool --version
If you are missing any of the above packages, you need to get a copy and install it on your computer. The distribution filenames for the GNU build tools, sans the version numbers, are:
 
autoconf-*.tar.gz
automake-*.tar.gz
libtool-*.tar.gz
Before installing these packages, however, you will need to install the following prerequisite packages from the FSF:
 
make-*.tar.gz
m4-*.tar.gz
texinfo-*.tar.gz
tar-*.shar.gz
You will need the GNU versions of make, m4 and tar, even if your system already has native versions of these utilities. To check whether you have the GNU versions, see whether they accept the `--version' flag. If you have proprietary versions of make or m4, rename them and then install the GNU ones. You will also need to install Perl, the GNU C compiler, and the TeX typesetter.
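For example, here is how the check looks for make; the exact version string will vary, and a proprietary make will typically report an error instead:
 
% make --version
GNU Make version 3.79.1, by Richard Stallman and Roland McGrath.
...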

It is important to note that the end user will only need a decent shell and a working make to build a source code distribution. The developer however needs to gather all of these tools in order to create the distribution.

Finally, to install Autotools, begin by installing the following additional utilities from the FSF:
 
bash-*.tar.gz
sharutils-*.tar.gz
and then install
 
autotools-*.tar.gz
You should be able to obtain a copy of Autotools from the same site from which you received this document.

The installation process for most of these tools is rather straightforward:
 
% ./configure
% make
% make check
% make install
Most of these tools include documentation which you can build with
 
% make dvi
The exceptions to the rule are Perl, the GNU C compiler and TeX, which have more complicated installation procedures. However, you are very likely to have these installed already.



1.3 Hello world example

To get started we will show you how to do the Hello world program using `autoconf' and `automake'. In the fine tradition of K&R, the C version of the hello world program is:
 
#include <stdio.h>
int main(void)
{
  printf("Howdy world!\n");
  return 0;
}
Call this `hello.c' and place it in an empty directory. Simple programs like this can be compiled and run directly with the following commands:
 
% gcc hello.c -o hello
% ./hello
If you are on a Unix system instead of a GNU system, your compiler might be called `cc' but the usage will be pretty much the same.

Now, to do the same thing the `autoconf' and `automake' way, first create the following two files:

`Makefile.am'
 
bin_PROGRAMS = hello
hello_SOURCES = hello.c
`configure.ac'
 
AC_INIT([Hello Program],[0.1],
        [Author Of The Program <aotp@zxcv.com>],
        [hello])
AC_CONFIG_AUX_DIR(config)
AM_INIT_AUTOMAKE([dist-bzip2])
AC_PROG_CC
AC_PROG_INSTALL
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

Now run `aclocal' and `autoconf':
 
% aclocal
% autoconf
This will create the shell script `configure'. Next, create the config directory and run `automake':
 
% mkdir config
% automake -a
configure.ac: installing `config/install-sh'
configure.ac: installing `config/mkinstalldirs'
configure.ac: installing `config/missing'
Makefile.am: installing `./INSTALL'
Makefile.am: required file `./NEWS' not found
Makefile.am: required file `./README' not found
Makefile.am: installing `./COPYING'
Makefile.am: required file `./AUTHORS' not found
Makefile.am: required file `./ChangeLog' not found
Makefile.am: installing `config/depcomp'
The first time you do this, you get a spew of messages. It says that `automake' installed a whole bunch of cryptic stuff: `install-sh', `mkinstalldirs', `missing' and `depcomp'. These are shell scripts needed by the makefiles that `automake' generates; you don't have to worry about what they do. It also complains that the following files are missing:
 
NEWS, README, AUTHORS, ChangeLog
These files are required to be present by the GNU coding standards, and we discuss them in detail in 1.6 Maintaining the documentation files. At this point it is important to at least make these files exist; otherwise, if you attempt a `make distcheck', it will deliberately fail. To create them, type:
 
% touch NEWS README AUTHORS ChangeLog
and to make Automake aware of the existence of these files, rerun it:
 
% automake -a
You can assume that the generated `Makefile.in' is correct only when Automake completes without any error messages.

Now the package is in exactly the state in which end-users will find it when they unpack it from a source code distribution. For future reference, we will call this state autoconfiscated. Being in an autoconfiscated state means that you are ready to type:
 
% ./configure
% make
% ./hello
to compile and run the hello world program. If you really want to install it, go ahead and call the `install' target:
 
# make install
To undo installation, that is to uninstall the package, do:
 
# make uninstall
If you didn't use the `--prefix' argument to point to your home directory, or to another directory in which you have write permission, you may need to be the superuser to invoke the `install' and `uninstall' targets. If you feel like cutting a source code distribution, type:
 
% make distcheck
This will create a file called `hello-0.1.tar.gz' in the current working directory, containing the project's source code, and it will test the distribution to see whether all the needed files are actually included and whether the source code passes the regression test suite.

In order to do all of the above, you need to use the GNU `gcc' compiler; Automake depends on `gcc''s ability to compute dependencies. Also, the `distcheck' target requires GNU make and GNU tar.

The GNU build tools assume that there are two types of hats that people like to wear: the developer hat and the installer hat. Developers develop the source code and create the source code distribution. Installers just want to compile and install a source code distribution on their system. In the free software community, the same people get to wear either hat depending on what they want to do. If you are a developer, then you need to install the entire GNU build system, period (see section 1.2 Installing the GNU build system). If you are an installer, then all you need to compile and install a GNU package is a minimal `make' utility and a minimal shell. Any native Unix shell and `make' will work.

Both Autoconf and Automake take special steps to ensure that packages generated through the `distcheck' target can be easily installed with minimal tools. Autoconf generates `configure' shell scripts that use only portable Bourne shell features. (FIXME: Crossreference: Portable shell programming) Automake ensures that the source code is in an autoconfiscated state when it is unpacked. It also regenerates the makefiles before adding them to the distribution, such that the installer targets (`all', `install', `uninstall', `check', `clean', `distclean') do not depend on GNU make features. The regenerated makefiles also do not rely on `gcc' to compute dependencies. Instead, precomputed dependencies are included in the regenerated makefiles, and the dependency generation mechanism is disabled. This allows the end-user to compile the package using a native compiler, if the GNU compiler is not available. For future reference we will call this the installer state.

Now wear your installer hat, and install `hello-0.1.tar.gz':
 
% gunzip hello-0.1.tar.gz
% tar xf hello-0.1.tar
% cd hello-0.1
% ./configure
% make 
% ./hello
This is the full circle: the distribution compiles, and by typing `make install' it installs. If you need to switch back to the developer hat, you should rerun `automake' to regenerate the makefiles.

When you run the `distcheck' target, `make' will create the source code distribution `hello-0.1.tar.gz', then pretend to be an installer and check whether the distribution can be unpacked, configured, compiled and installed. It will also run the test suite, if one is bundled. If you would like to skip these tests, run the `dist' target instead:
 
% make dist
Nevertheless, running `distcheck' is extremely helpful in debugging your build cruft. Please never release a distribution without passing it through `distcheck'. If you make daily distributions for off-site backup, please do pass them through `distcheck' too. If files are missing from your distribution, the `distcheck' target will detect this. If you fail to notice such problems, your backups will be incomplete, giving you a false sense of security.



1.4 Understanding the hello world example

When you made the `hello-0.1.tar.gz' distribution, most of the files were automatically generated. The only files that were actually written by your fingers were:

`hello.c'
 
#include <stdio.h>
int main(void)
{
  printf("Howdy, world!\n");
  return 0;
}
`Makefile.am'
 
bin_PROGRAMS = hello
hello_SOURCES = hello.c
`configure.ac'
 
AC_INIT([Hello Program],[0.1],
        [Author Of The Program <aotp@zxcv.com>],
        [hello])
AC_CONFIG_AUX_DIR(config)
AM_INIT_AUTOMAKE([dist-bzip2])
AC_PROG_CC
AC_PROG_INSTALL
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
In this section we briefly explain what the files `Makefile.am' and `configure.ac' mean.

The language of `Makefile.am' is a logic language. There is no explicit statement of execution; there is only a statement of relations, from which execution is inferred. On the other hand, the language of `configure.ac' is procedural: each line of `configure.ac' is a command that is executed.

Seen in this light, here's what the `configure.ac' commands shown above do:

`AC_INIT'
Initializes Autoconf, and declares the package name, version, bug-report address and tarball name. The tarball name, `hello', combined with the version number gives the distribution its name, `hello-0.1.tar.gz'.
`AC_CONFIG_AUX_DIR'
Asks for the auxiliary scripts that the build system needs to be placed in the `config' subdirectory, instead of cluttering the top-level directory.
`AM_INIT_AUTOMAKE'
Initializes Automake. The `dist-bzip2' option asks for distributions to be offered in bzip2 format as well.
`AC_PROG_CC'
Probes for a working C compiler.
`AC_PROG_INSTALL'
Probes for a BSD-compatible install utility, falling back to a portable replacement script if none is found.
`AC_CONFIG_FILES' and `AC_OUTPUT'
Declare which files `configure' should generate from their `.in' templates, and generate them.

The `Makefile.am' is more obvious. The first line specifies the name of the program we are building. The second line specifies the source files that compose the program.
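For example, a hypothetical package that builds two programs would state only the corresponding relations, and Automake would infer all the build rules from them:
 
bin_PROGRAMS = hello goodbye
hello_SOURCES = hello.c
goodbye_SOURCES = goodbye.c common.c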

For now, as far as `configure.ac' is concerned, you need to know the following additional facts: every command in `configure.ac' is a macro, written in the `m4' macro language, that expands to portable Bourne shell code; and the order of the commands matters, so `AC_INIT' must be invoked first, and `AC_CONFIG_FILES' and `AC_OUTPUT' last.

Now consider the commands that are used to build the hello world distribution:
 
% aclocal
% autoconf
% touch README AUTHORS NEWS ChangeLog
% automake -a 
% ./configure
% make
The first four commands bring the package into the autoconfiscated state. The remaining two commands do the actual configuration and building. More specifically:

`aclocal'
Gathers the definitions of all the macros used by `configure.ac' into the file `aclocal.m4'.
`autoconf'
Generates the `configure' script from `configure.ac' and `aclocal.m4'.
`touch README AUTHORS NEWS ChangeLog'
Creates the documentation files that Automake requires (empty, for now).
`automake -a'
Generates the `Makefile.in' template from `Makefile.am', copying in any missing auxiliary files.
`./configure'
Probes your platform and generates `Makefile' from `Makefile.in'.
`make'
Builds the program from the generated `Makefile'.

The `configure' script probes your platform and generates makefiles that are customized for building the source code on your platform. The specifics of how the probing should be done are programmed in `configure.ac'. The generated makefiles are based on templates that appear in `Makefile.in' files. In order for these templates to cooperate with `configure' and produce makefiles that conform to the GNU coding standards, they need to contain a tedious amount of boilerplate. This is where Automake comes in. Automake generates the `Makefile.in' files from the much terser descriptions in `Makefile.am' files. As you have seen in the example, `Makefile.am' files can be very simple in simple cases. Once you have customized makefiles, your make utility takes over.
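The flow of files through these tools can be summarized as follows:
 
configure.ac ---[aclocal]---> aclocal.m4
configure.ac + aclocal.m4 ---[autoconf]---> configure
Makefile.am ---[automake]---> Makefile.in
Makefile.in ---[configure]---> Makefile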



1.5 Using configuration headers

If you inspect the output of `make' while compiling the hello world example, you will see that the generated Makefile is passing `-D' flags to the compiler that define macros such as PACKAGE and VERSION. These macros are assigned the package name and version number that are declared by the AC_INIT command in `configure.ac'; the AM_INIT_AUTOMAKE command arranges for them to be defined. Defining such C preprocessor macros is one of the ways in which `configure' customizes your source code to a specific platform.
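For instance, when compiling the hello world example, the compile line looks something like the following sketch (abbreviated; recent Autoconf versions define several additional PACKAGE_* macros):
 
gcc -DPACKAGE=\"hello\" -DVERSION=\"0.1\" -I. -g -O2 -c hello.c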

The GNU build system by default implements C preprocessor macro definitions by passing `-D' flags to the compiler. When there are too many of these flags, we have two problems: the `make' output becomes hard to read, and, more importantly, we run the risk of hitting the command-line length limits of braindead Unix implementations of `make'. To work around this problem, you can ask Autoconf to use another approach, in which all macros are defined in a special header file that is included by all the sources. This header file is called a configuration header.

A hello world program using this technique looks like this:

`configure.ac'
 
AC_INIT([Hello Program],[0.1],
        [Author Of The Program <aotp@zxcv.com>],
        [hello])
AC_CONFIG_AUX_DIR(config)
AC_CONFIG_HEADERS([config.h])
AM_INIT_AUTOMAKE([dist-bzip2])
AC_PROG_CC
AC_PROG_INSTALL
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
`Makefile.am'
 
bin_PROGRAMS = hello
hello_SOURCES = hello.c
`hello.c'
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
int main(void)
{
  printf("Howdy, pardner!\n");
  return 0;
}
To request the use of a configuration header, we use the AC_CONFIG_HEADERS command. The configuration header must then be included conditionally by your sources, with the following three lines:
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
It is important that `config.h' is the first thing that gets included. Now autoconfiscate the source code by typing:
 
% aclocal
% autoconf
% touch NEWS README AUTHORS ChangeLog
% autoheader
% automake -a
It is important to type these commands in the order shown. The difference between this and what we did in 1.3 Hello world example is that we had to run a new program: `autoheader'. This program scans `configure.ac' and generates a template file, `config.h.in', listing all the C preprocessor macros that might be defined, together with comments describing what they do. When you run `configure', it uses `config.h.in' to generate the final `config.h' that is used by the source code during compilation.
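For this example, the generated `config.h.in' contains entries that look something like this (a sketch; the exact set of entries depends on your Autoconf and Automake versions):
 
/* Name of package */
#undef PACKAGE

/* Version number of package */
#undef VERSION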

Now you can go ahead and build the program:
 
% ./configure
% make
gcc -DHAVE_CONFIG_H -I. -I. -I.   -g -O2 -c hello.c
gcc -g -O2  -o hello  hello.o 
Note that now, instead of multiple `-D' flags, only one such flag is passed: `-DHAVE_CONFIG_H'. Also, appropriate `-I' flags are passed to make sure that `hello.c' can find and include `config.h'. To test the distribution, type:
 
% make distcheck
......
========================
hello-0.1.tar.gz is ready for distribution
========================
and it should all work out.

The `config.h' files go a long way back in history. In the past, there used to be packages where you would have to edit the `config.h' file by hand, adjusting the macros you wanted defined. This made those packages very difficult to install, because they required intimate knowledge of your operating system. For example, it was not unusual to see a comment saying "if your system has a broken vfork, then define this macro". Many installers found this frustrating, because they didn't really know how to configure the esoteric details of the `config.h' files. With autoconfiscated source code, all of these details can be taken care of automatically, shifting this burden from the installer to the developer, where it belongs.



1.6 Maintaining the documentation files

Every software project must have its own directory. A minimal "project" is the example that we described in 1.3 Hello world example. In general, even a minimal project must have the files:
 
README, INSTALL, AUTHORS, THANKS, NEWS, ChangeLog
Before distributing your source code, it is important to write the real contents of these files. In this section we give a summary overview of how these files should be maintained. For more details, please see the GNU coding standards, as published by the FSF.

The `acmkdir' utility will automatically create templates for these files that you can start from.



1.7 Organizing your project in subdirectories

If your program is very small, you can place all of your files in the top-level directory, as we did in the Hello World example (see section 1.3 Hello world example). Such packages are called shallow.

In general, it is preferred to organize your package as a deep package. In a deep package, the documentation files
 
README, INSTALL, AUTHORS, THANKS, ChangeLog, COPYING
as well as the build cruft are placed at the top-level directory, and the rest of the files are placed in subdirectories. It is standard practice to use the following subdirectories:

`src'
The actual source code that gets compiled. Every library should have its own subdirectory, and executables should get their own directory as well. If each executable corresponds to only one or two files, then it is sensible to put them all in the same directory. If your executables need more source files, or if they can be separated into distinct classes of functionality, you may like to group them under multiple directories; feel free to use your judgement on how to do this best. It is easiest to place a library's test suite in the same directory as the library's source code. If that does not sit well with you, you should put the test suite for each library in a subdirectory under that library's directory. It is a massively bad idea to put the test suites for different libraries in the same directory.
`lib'
An optional directory where you put portability-related source code. This is mainly replacement implementations for system calls that are unavailable on some systems, most commonly functions of the GNU C library that are missing from proprietary C libraries. You can also put here tools that you commonly use across many different packages, but that are too simple to justify making a separate library out of each one.
`doc'
A directory containing the documentation for your package. You have the creative freedom to present the documentation in any way that is effective. However, the preferred way to document software is by using Texinfo. Texinfo has the advantage that you can produce both on-line help and nice printed books from the same source. Documentation is discussed in more detail in section 9. Maintaining Documentation.
`m4'
A directory containing `m4' files that your package may need to install. These files define new `autoconf' macros that you should make available to other developers who want to use your libraries. This is discussed in more detail in FIXME: crossreference.
`intl'
A directory containing boilerplate portability source code that allows your program to speak in many human languages. The contents of this directory are automatically maintained by `gettext'. (FIXME: crossreference)
`po'
A directory containing the message catalogs for your software package. This is where the maintainer places the translations of the software's messages into multiple human languages. (FIXME: crossreference)
Automake makes it very easy to maintain multidirectory source code packages, so you shouldn't shy away from taking advantage of it. Multidirectory packages are more convenient for most projects.
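As an illustration, the directory tree of a hypothetical deep package might look something like this:
 
foo-0.1/
  README INSTALL AUTHORS THANKS NEWS ChangeLog COPYING
  configure.ac Makefile.am ...
  src/
    libfoo/     (a library, together with its test suite)
    foo/        (the `foo' executable)
  lib/
  doc/
  m4/
  po/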



1.8 Hello world with an attitude

How to do it with acmkdir.



1.9 Tracking version numbers

Common sense requires that you identify the various releases of your software package with a version number. If you use the GNU build system, then you indicate the name of the package and the version number in `configure.ac', in the line that invokes the `AC_INIT' macro. In the hello world example (see section 1.3 Hello world example) we used the following lines to set the version number equal to 0.1:

 
AC_INIT([Hello Program],[0.1],
        [Author Of The Program <aotp@zxcv.com>],
        [hello])
You must increase your version number every time you publicly release a new version of your program. Just before the release, it is a very good idea to update your `ChangeLog' and note the release of the new version. This way, when people inspect your `ChangeLog', they will be able to determine what changes happened between two specific versions. We suggest that when you are about to make a release, you use

 
% make distcheck
to build a distribution and run the test suite to validate it. Once you get this to work, change the version number in `configure.ac', record an entry in `ChangeLog' saying that you are cutting the new version, and, without making any other changes, do

 
% make dist
to rebuild the distribution without having to wait for the test suite to run all over again.
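Such a `ChangeLog' entry might look something like this (the name, address and date are hypothetical; the entry lines are indented with a TAB):
 
2000-03-04  My Name  <me@here.com>

	* Version 0.2 released.
	* configure.ac: bumped the version number to 0.2.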

Most packages declare their version with two integers: a major number and a minor number, separated by a dot. In our example above, the major number is 0 and the minor number is 1. The minor number should be increased when you release a version that contains new features and improvements over the old version. The major number should be increased when the incremental improvements have brought your program to a new level of maturity and stability. Some of your users may not want to follow every release that comes out, but would like to upgrade when there is a significant amount of new features to warrant the upgrade. You should increase the major number when, in your judgement, it is time for these users to upgrade along with everyone else.

When beginning a new project, you should start counting your major number from 0, and your minor number from 1. Please exercise good judgement on when to increment your major number to 1. In general versions 0.x mean that the software is still under development and may not be stable. When you release version 1.0, you are telling people that your software has developed to the point that you recommend it for general use. In some cases, releasing version 2.0 means that your software has significantly matured from user feedback.

Sometimes it is useful to use a third integer for writing the version numbers of "unofficial" releases. This third integer is usually used for bleeding-edge prereleases of the software, which contain the most recent bug fixes and features but are not as well tested and reviewed as the most recent official release. A possible version succession can look like:

 
1.0, 1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.3, ...
Please use only two integers for official releases so that it is easy to distinguish them from prereleases.



2. Writing Good Programs

2.1 Why good code is important  
2.2 Choosing a good programming language  
2.3 Developing libraries  
2.4 Developing applications  
2.5 Free software is good software  
2.6 Invoking the `gpl' utility  
2.7 Inserting notices with Emacs  



2.1 Why good code is important

When you work on a software project, one of your short-term goals is to solve the problem at hand. If you are doing this because someone asked you to solve the problem, then all you need to do to look good in their eyes is to deliver a program that works. Nevertheless, regardless of how little they may appreciate this, doing just that is not good enough. Once you have code that gives the right answer to a specific set of problems, you will want to make improvements to it. As you make these improvements, you would like to have proof that the code's known reliability hasn't regressed. Also, tomorrow you will want to move on to a different set of related problems while repeating as little work as possible. Finally, one day you may want to pass the project on to someone else, or recruit another developer to help you out with certain parts; you need to make it possible for the other person to get up to speed without reinventing your efforts. To accomplish these equally important goals, you need to write good code.



2.2 Choosing a good programming language

To write good software, you must use the appropriate programming language and use it well. To make your software free, it should be possible to compile it with free tools on a free operating system. Therefore, you should avoid programming languages that do not have a free compiler.

The C programming language is the native language of GNU, and the GNU coding standards encourage you to program in C. The main advantages of C are that it can be compiled with the system's native compiler, that many people know it, and that it is easy to learn. Nevertheless, C has weaknesses: it forces you to manage memory allocation manually, and any mistakes you make can lead to bugs that are very difficult to track down. C also forces you to program at a low level. Sometimes that is desirable, but there are also cases where you want to build at a higher level.

For projects where you would like a higher-level compiled language, the recommended choice is C++. The GNU project distributes a free C++ compiler, and nowadays most GNU systems that have a C compiler also have the free C++ compiler. The main advantage of C++ is that it can automatically manage dynamic memory allocation for you, through constructors, destructors and the standard containers. C++ also has a lot of powerful features that allow you to program at a higher level than C, closer to the algorithms and the concepts involved, making it easier to write robust programs. At the same time, C++ does not hide low-level details from you, and you retain the freedom to do the same low-level hacks that you did in C, if you so choose. In fact, C++ is 99% backwards compatible with C, and it is very easy to mix C and C++ code. Finally, C++ is an industry standard. As a result, it has been used to solve a variety of real-world problems, and its specification has evolved over many years into a powerful and mature language that can tackle such problems effectively. The C++ specification was frozen and became an ISO standard in 1998.

One of the disadvantages of C++ is that C++ object files compiled by different C++ compilers cannot be linked together. In order to compile C++ to machine language, a lot of compilation issues need to be deferred to the linking stage. Because object file formats are not traditionally sophisticated enough to handle these issues, C++ compilers resort to various ugly kludges. The problem is that different compilers do these kludges differently, making object files incompatible across compilers. This is not a terrible problem, since object files are incompatible across different platforms anyway; it is only a problem when you want to use more than one compiler on the same platform. Another disadvantage of C++ is that it is harder to interface a C++ library to another language than it is to interface a C library. Finally, not as many people know C++ as well as they know C, and C++ is a very extensive and difficult language to master. However, these disadvantages must be weighed against the advantages: there is a price to using C++, but the price comes with a reward.

If you need a higher-level interpreted language, then the recommended choice is to use Guile. Guile is the GNU variant of Scheme, a LISP-like programming language. Guile is an interpreted language, and you can write full programs in Guile, or use the Guile interpreter interactively. Guile is compatible with the R4RS standard but provides a lot of GNU extensions. The GNU extensions are so extensive that it is possible to write entire applications in Guile. Most of the low-level facilities that are available in C, are also available in Guile.

What makes the Guile implementation of Scheme special is not the extensions themselves, but the fact that it is very easy for any developer to add their own extensions to Guile by implementing them in C. By combining C and Guile, you leverage the advantages of both compiled and interpreted languages: performance-critical functionality can be implemented in C, and higher-level software development can be done in Guile. Also, because Guile is interpreted, when you make your C code available through an extended Guile interpreter, the user can exercise the functionality of that code interactively through the interpreter.

The idea of extensible interpreted languages is not new. Other examples of extensible interpreted languages are Perl, Python and Tcl. What sets Guile apart from these languages is the elegance of Scheme. Scheme is the holy grail in the quest for a programming language that can be extended to support any programming paradigm using the least amount of syntax. Scheme has natural support for both arbitrary-precision integer arithmetic and floating-point arithmetic. The simplicity of Scheme syntax, and the completeness of Guile, make it very easy to implement specialized scripting languages simply by translating them to Scheme. In Scheme, algorithms and data are interchangeable. As a result, it is easy to write Scheme programs that manipulate Scheme source code, which makes Scheme an ideal language for writing programs that manipulate algorithms instead of data, such as programs that do symbolic algebra. Because Scheme can manipulate its own source code, a Scheme program can save its state by writing Scheme source code to a file, and parse it later to load the state back up again. This feature alone is one reason why engineers should use Guile to configure and drive numerical simulations.

Some people like to use Fortran 77. It is in many ways a good language for developing the computational core of scientific applications, and we do have free compilers for Fortran 77, so using it does not restrict our freedom (see section 8. Fortran with Autoconf). Also, Fortran 77 is an aggressively optimizable language, which makes it very attractive to engineers who want to write code optimized for speed. Unfortunately, Fortran 77 cannot do anything well except array-oriented numerical computations. Managing input/output is unnecessarily difficult with Fortran, and there are even computational areas, such as arbitrary-precision integer arithmetic and symbolic computation, that it does not support.

There are many variants of Fortran, like Fortran 90 and HPF. Fortran 90 attempts, quite miserably, to make Fortran 77 more like C++. HPF allows engineers to write numerical code that runs on parallel computers. These variants should be avoided for two reasons:

  1. There are no free compilers for Fortran 90 or HPF. If you happen to use a proprietary operating system, you might as well make use of proprietary compilers, if they generate highly optimized code and that is important to you. Nevertheless, in order for your software to be free, it must be possible to compile it with free tools on a free operating system. Because it is possible to build parallel computers using GNU/Linux (see the Beowulf project), parallelized software can also be free. Therefore, both Fortran 90 and HPF should be avoided.
  2. Another problem with these variants is that they are ad hoc languages, invented to enable Fortran to do things that it cannot do by design. Eventually, when engineers want to do things that Fortran 90 can't do either, it will be necessary to extend Fortran again, rewrite the compilers, and produce yet another variant. What engineers need is a programming language that can extend itself with software written in the language itself. The C++ programming language can do this without loss of performance. The departmentalization of disciplines in academia has made it very difficult for such a project to take off; despite that, there is ongoing research in this area (for example, see the Blitz++ project).
It is almost impossible to write good programs entirely in Fortran, so please use Fortran only for the numerical core of your application, and do the bookkeeping tasks, including input/output, in a more appropriate language.

If you have written a program entirely in Fortran, please do not ask anyone else to maintain your code, unless they, like you, know only Fortran. If Fortran is the only language that you know, then please learn at least C and C++, and use Fortran only when necessary. Please do not hold the opinion that contributions in science and engineering are "true" contributions while software development is just a "tool"; this bigoted attitude is behind the thousands of lines of ugly, unmaintainable code that circulate in many places. Good software development can be an important contribution in its own right, and regardless of what your goals are, please appreciate it and encourage it. To maximize the benefits of good software, please make your software free. (FIXME: Crossreference copyright section in this chapter)



2.3 Developing libraries

The key to better code is to move away from developing monolithic throw-away hacks that do only one job, and to focus on developing libraries (FIXME: crossreference). Break down the original problem into parts, and the parts into smaller parts, until you get down to simple subproblems that can be easily tested, and from which you can construct solutions both for the original problem and for future variants. Every library that you write is a legacy that you can share with other developers who want to solve similar problems. Each library allows these other developers to focus on their own problem, and not have to reinvent from scratch the parts that are common with your work. You should definitely make libraries out of subproblems that are likely to be broadly useful, and please be very liberal in what you consider "broadly useful". Please program in a defensive way that renders as much code as possible reusable, regardless of whether or not you plan to reuse it in the near future. The final application should merely have to assemble all the libraries together and make their functionality accessible to the user through a good interface.

It is very important for each of your libraries to have a complete test suite. The purpose of the test suite is to detect bugs in the library, and to prove to you, or convince you, the developer, that the library works. A test suite is composed of a collection of test programs that link with your library and exercise the features it provides. These test programs should return with
 
exit(0);
if they do not detect anything wrong with the library and with
 
exit(1);
if they detect problems. The test programs should not be installed with the rest of the package. They are meant to be run after your software is compiled and before it is installed; therefore, they should be written so that they can run using the compiled but uninstalled files of the library. Test programs should not output messages by default: they should run completely quietly and communicate with the environment in a yes-or-no fashion through their exit code. However, it is useful for test programs to be able to output debugging information when they fail during development. Statements that output such information should be surrounded by conditional directives, like this:
 
#if INSPECT_ERRORS
 printf("Division by zero: %d / %d\n",a,b);
#endif
This way it becomes easy to switch them on or off on demand. The preferred way to control a macro like INSPECT_ERRORS is by adding a switch to your `configure' script. You can do this by adding the following lines to `configure.ac':
 
AC_ARG_WITH(inspect,
  [  --with-inspect           Inspect test suite errors],
  [ AC_DEFINE(INSPECT_ERRORS, 1, "Inspect test suite errors")],
  [ AC_DEFINE(INSPECT_ERRORS, 0, "Inspect test suite errors")])
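Putting these pieces together, a complete test program might look something like the following sketch, where the function `divide' and its header `foo.h' are hypothetical stand-ins for whatever your library provides:
 
#include <stdlib.h>
#include <stdio.h>
#include "foo.h"  /* hypothetical header declaring divide() */

int main(void)
{
  if (divide(6, 3) != 2) {
#if INSPECT_ERRORS
    printf("Division test failed: divide(6,3) != 2\n");
#endif
    exit(1);  /* a problem was detected */
  }
  exit(0);    /* the library checks out */
}
To see the debugging output when a test fails, reconfigure with `./configure --with-inspect' and rebuild.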
After the library is debugged, the debugging statements should not be removed. If a future version of the library regresses and an old test begins to fail again, it will be useful to be able to reactivate the same error messages that were useful in debugging the test when it was first put together, and perhaps to add a few new ones.

The best time to write each test program is as soon as it is possible! You should not be lazy, and you should not just keep throwing in code after code after code. The minute there is enough code in there to put together some kind of test program, just do it! When you write new code, it is easy to think that you are producing work with every new line of code. The reality is that you know you have produced new work every time you get a test program for a new feature working, and not a minute before. Another time when you should definitely write a test program is when you find a bug while ordinarily using the library. Write a test program that triggers the bug, fix the bug, and keep the test in your test suite. This way, if a future modification reintroduces the same bug, it will be detected.

Please document your library as you go. The best time to update your documentation is immediately after you get a new test program checking out a new feature. You might feel that you are too busy to write documentation, but the truth of the matter is that you will always be too busy. In fact, if you are a busy person, you are likely to have many other obligations competing for your attention, and there may be times when you have to stay away from a project for a long period. If you have been consistently maintaining documentation, it will help you refocus on your project even after many months of absence.



2.4 Developing applications

Applications are complete executable programs that can be run by the end-user. With library-oriented development, the actual functionality is developed in libraries and debugged through each library's test suite. With command-line oriented applications, the application source code parses the arguments that are passed to it by the user, and calls up the right functions in the libraries to carry out the user's requests. With GUI (graphical user interface) applications, the application source code creates the widgets that compose the interface, binds them to actions, and then enters an event loop. Each action is implemented in terms of the functionality provided by the appropriate library.

It should be possible to implement an application using relatively few application-specific source files, since most of the functionality is actually implemented in libraries. In some cases, the application is simple enough that it would be overkill to package its functionality as a library. Nevertheless, even in such cases, please separate the source code that implements the actual functionality from the source code that handles the user interface, and please always separate the code that handles input/output from the code that does the actual computations. If these aspects of your source code are sufficiently separated, then you make it easier for other people to reuse parts of your code in their applications, and you make it easier for yourself to switch to library-oriented development when your application grows and is no longer "simple enough".

Library-oriented development allows you to write good and robust applications. In return it requires discipline. Sometimes you may need to add experimental functionality that is not available through your libraries. The right thing to do is to extend the appropriate library. The easy thing to do is to implement it as part of your application-specific source code. If the feature is experimental and undergoing many changes, it may be best to go with the easy approach at first. Still, when the feature matures, please migrate it to the appropriate library, document it, and take it out of the application source code. What we mean by discipline is doing these migrations, when the time is right, despite pressures from "real life", such as deadlines, pointy-haired bosses, and nuclear terrorism. A rule of thumb for deciding when to migrate code to a library is when you find yourself cut-n-pasting chunks of code from application to application. If you do not do the right thing, your code will become increasingly harder to debug, harder to maintain, and less reliable.

Applications should also be documented, especially the ones that are command-line oriented. Application documentation should be thorough in explaining to users all the things they need to know to use the application effectively, and should be distributed separately from the application itself. Nevertheless, applications should recognize the `--help' switch and output a synopsis of how the application is used, and they should recognize the `--version' switch and state their version number. The easiest way to make applications understand these two switches is to use the GNU Argp library (FIXME: crossreference).
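If you would rather not pull in a library, a minimal hand-rolled sketch of the two switches looks like this; the program name `frob' and its synopsis are hypothetical:
 
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
  int i;
  for (i = 1; i < argc; i++) {
    if (strcmp(argv[i], "--help") == 0) {
      printf("Usage: frob [OPTION]... [FILE]...\n");  /* hypothetical synopsis */
      return 0;
    }
    if (strcmp(argv[i], "--version") == 0) {
      printf("frob 0.1\n");  /* hypothetical version */
      return 0;
    }
  }
  /* ... the real work of the application goes here ... */
  return 0;
}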



2.5 Free software is good software

One of the reasons why you should write good code is that it allows you to make your code robust, reliable and most useful to your own needs. Another reason is to make it useful to other people too, and to make it easier for them to work with your code and reuse it for their own work. In order for this to be possible, you need to worry about a few obnoxious legal issues.



2.6 Invoking the `gpl' utility

Maintaining these legalese notices can be quite painful after some time. To ease the burden, Autotools distributes a utility called `gpl'. This utility will conveniently generate for you all the legal wording you will ever want to use. It is important to know that this application is not approved in any way by the Free Software Foundation. By this I mean that I haven't asked their opinion of it yet.

To create the file `COPYING' type:
 
% gpl -l COPYING
If you want to include a copy of the GPL in your documentation, you can generate a copy in texinfo format like this:
 
% gpl -lt gpl.texi
Also, every time you want to create a new file, use `gpl' to generate the copyright notice. If you want it covered by the GPL, use the standard notice. If you want to invoke the Guile-like permissions, then also use the library notice. If you want to grant unlimited permissions, meaning no copyleft, use the special notice. The `gpl' utility takes many different flags to take into account the different commenting conventions.



2.7 Inserting notices with Emacs

If you are using GNU Emacs, then you can insert these copyright notices on-demand while you're editing your source code. Autotools bundles two Emacs packages: gpl and gpl-copying which provide you with equivalents of the `gpl' command that can be run under Emacs. These packages will be byte-compiled and installed automatically for you while installing Autotools.

To use these packages, in your `.emacs' you must declare your identity by adding the following commands:
 
(setq user-mail-address "me@here.com")
(setq user-full-name "My Name")
Then you must require the packages to be loaded:
 
(require 'gpl)
(require 'gpl-copying)
These packages introduce a set of Emacs commands all of which are prefixed as gpl-. To invoke any of these commands press M-x, type the name of the command and press enter.

The following commands will generate notices for your source code:

`gpl-c'
Insert the standard GPL copyright notice using C commenting.
`gpl-cL'
Insert the standard GPL copyright notice using C commenting, followed by a Guile-like library exception. This notice is used by the Guile library. You may want to use it for libraries that you write that implement some type of a standard that you wish to encourage. You will be prompted for the name of your package.
`gpl-cc'
Insert the standard GPL copyright notice using C++ commenting.
`gpl-ccL'
Insert the standard GPL copyright notice using C++ commenting, followed by a Guile-like library exception. You will be prompted for the name of your package.
`gpl-sh'
Insert the standard GPL copyright notice using shell commenting (i.e. hash marks).
`gpl-shL'
Insert the standard GPL copyright notice using shell commenting, followed by a Guile-like library exception. This can be useful for source files that use hash marks for commenting but are really code that gets linked in to form an executable, like Tcl files.
`gpl-shS'
Insert the standard GPL notice using shell commenting, followed by the special Autoconf exception. This is useful for small shell scripts that are distributed as part of a build system.
`gpl-m4'
Insert the standard GPL copyright notice using m4 commenting (i.e. dnl) and the special Autoconf exception. This is the preferred notice for new Autoconf macros.
`gpl-el'
Insert the standard GPL copyright notice using Elisp commenting. This is useful for writing Emacs extension files in Elisp.
The following commands will generate notices for your documentation:
`gpl-insert-copying-texinfo'
Insert a set of paragraphs very similar to the ones appearing in the Copying section of this manual. It is a good idea to include this notice in an unnumbered chapter titled "Copying" in the Texinfo documentation of your source code. You will be prompted for the title of your package. That title will be substituted for the word Autotools as it appears in the corresponding section of this manual.
`gpl-insert-license-texinfo'
Insert the full text of the GNU General Public License in Texinfo format. If your documentation is very extensive, it may be a good idea to include this notice either at the very beginning of your manual or at the end. You should include the full license if you plan to distribute the manual separately from the package as a printed book.



3. Using GNU Emacs

3.1 Introduction to Emacs  
3.2 Installing GNU Emacs  
3.3 Configuring GNU Emacs  
3.4 Using vi emulation  
3.5 Using Emacs as an IDE  
3.6 Inserting copyright notices with Emacs  
3.7 Using Emacs as an email client  
3.8 Handling patches  
3.9 Further reading on Emacs  



3.1 Introduction to Emacs

Emacs is an environment for running Lisp programs that manipulate text interactively. Because Emacs is completely programmable, it can be used to implement not only editors, but a full integrated development environment for software development. Emacs can also browse info documentation, and run an email client, a newsgroup reader, a sophisticated xterm, and even an understanding psychotherapist.

Under the X window system, Emacs controls multiple X windows called frames. Each frame has a menubar and the main editing area. The editing area is divided into windows by horizontal bars. You can grab these bars and move them around with the first mouse button. (2) Each window is bound to a buffer. A buffer is an Emacs data structure that contains text. Most editing commands operate on buffers, modifying their contents. When a buffer is bound to a window, then you can see its contents as they are being changed. It is possible for a buffer to be bound to two windows, on different frames or on the same frame. Then whenever a change is made to the buffer, it is reflected on both windows. It is not necessary for a buffer to be bound to a window in order to operate on it. In a typical Emacs session you may be manipulating more buffers than the windows that you have on your screen.

A buffer can be visiting a file. In that case, the contents of the buffer reflect the contents of a file that is being edited. But buffers can be associated with anything you like, so long as you program it up. For example, under the Dired directory editor, a buffer is bound to a directory, showing you the contents of the directory. When you press Enter while the cursor is over a file name, Emacs creates a new buffer, visits the file, and rebinds the window to that buffer. From the user's perspective, by pressing Enter he "opened" the file for editing. If the file has already been "opened", then Emacs simply rebinds the window to the existing buffer for that file.

Emacs uses a variant of Lisp, called Emacs Lisp, as its programming language. Every time you press a key, click the mouse, or select an entry from the menubar, an Emacs Lisp function is evaluated. The mode of the buffer determines, among many other things, what function to evaluate. This way, every buffer can be associated with functionality that defines what you do in that buffer. For example you can program your buffer to edit text, to edit source code, to read news, and so on. You can also run Lisp functions directly on the current buffer by typing M-x and the name of the function that you want to run. (3)
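
As a small taste of what this looks like, here is a (hypothetical) Emacs Lisp command; if you add it to your `.emacs', you can invoke it with M-x insert-date to insert the current date in the current buffer:
 
(defun insert-date ()
  "Insert the current date at point."
  (interactive)
  (insert (format-time-string "%Y-%m-%d")))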

What is known as the "Emacs editor" is the default implementation of an editor under the Emacs system. If you prefer the vi editor, then you can instead run a vi clone, Viper (see section 3.4 Using vi emulation). The main reason why you should use Emacs is not the particular editor, but the way Emacs integrates editing with all the other tasks that you like to do as a software developer: compiling, reading documentation, handling email and news, and so on.

All of these features make Emacs a very powerful, albeit unusual, integrated development environment. Many users of proprietary operating systems, like Lose95 (4), complain that GNU (and Unix) does not have an integrated development environment. As a matter of fact, it does: Emacs.

Emacs has its own very extensive documentation (see section 3.9 Further reading on Emacs). In this manual we will only go over the fundamentals for using Emacs effectively as an integrated development environment.



3.2 Installing GNU Emacs

If Emacs is not installed on your system, you will need to get a source code distribution and compile it yourself. Installing Emacs is not difficult. If Emacs is already installed on your GNU/Linux system, make sure that you do indeed have GNU Emacs and not the XEmacs variant. Also, make sure that you have version 20.3 or newer. Finally, there are some variations in how Emacs can be installed. The installer can choose whether or not to install support for multiple languages and for reading email over a POP server. It can be very useful to have both. If the preinstalled version supports neither, then uninstall it and reinstall Emacs from a source code distribution.

The emacs source code is distributed in three separate files:

`emacs-20.3.tar.gz'
This is the main Emacs distribution. If you do not care about international language support, you can install this by itself.
`leim-20.3.tar.gz'
This supplements the Emacs distribution with support for multiple languages. If you develop internationalized software, it is likely that you will need this.
`intlfonts-1.1.tar.gz'
This file contains the fonts that Emacs uses to support international languages. If you want international language support, you will definitely need this.
Get a copy of these files, place them under the same directory and unpack them with the following commands:
 
% gunzip emacs-20.3.tar.gz
% tar xf emacs-20.3.tar
% gunzip leim-20.3.tar.gz
% tar xf leim-20.3.tar
Both tarballs will unpack under the `emacs-20.3' directory. When this is finished, go in and compile the source code:
 
% cd emacs-20.3
% ./configure --with-pop
% make
This will take quite a while. When done, install Emacs with
 
# make install
To install `intlfonts-1.1.tar.gz' unpack it, and follow the instructions in the `README' file.



3.3 Configuring GNU Emacs

To use Emacs effectively for software development you need to configure it. Part of the configuration needs to be done in your X-resources file. On a Debian GNU/Linux system, the X-resources can be configured by editing
 
/etc/X11/Xresources
In many systems, you can configure X-resources by editing a file called `.Xresources' or `.Xdefaults' in your home directory, but that is system-dependent. The configuration that I use on my system is:
 
! Emacs defaults
emacs*Background: Black
emacs*Foreground: White
emacs*pointerColor: White
emacs*cursorColor: White
emacs*bitmapIcon: on
emacs*font: fixed
emacs*geometry: 80x40
In general I favor dark backgrounds and `fixed' fonts. Dark backgrounds make it easier to sit in front of the monitor for a prolonged period of time. The `fixed' font looks nice and is small enough to make efficient use of your screen space. Some people might prefer larger fonts, however.

The bulk of Emacs configuration is done by editing or creating an `.emacs' file in your home directory. If you feel comfortable editing this file with the unconfigured Emacs editor, go for it. Alternatively, you can use the vanilla vi editor (see section 3.4 Using vi emulation). Here are some things that you might want to add to your `.emacs' file:
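
For instance, here are a few common settings (these are only illustrative suggestions; adjust them to taste):
 
(global-font-lock-mode t)      ; colorize source code
(column-number-mode t)         ; show the column number on the modeline
(setq make-backup-files nil)   ; don't litter directories with backup files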



3.4 Using vi emulation

Many people prefer to use the `vi' editor. The `vi' editor is the standard editor on Unix. It is also always available on GNU/Linux. Many system administrators find it necessary to use vi, especially when they are in the middle of setting up a system in which Emacs has not been installed yet. Besides that, there are many compelling reasons why people like vi.

Because most rearrangements of finger habits are not as optimal as the vi finger habits, most vi users react very unpleasantly to other editors. For the benefit of these users, in this section we describe how to run a vi editor under the Emacs system. Similarly, users of other editors find the vi finger habits strange and unintuitive. For their benefit we describe briefly how to use the vi editor, so they can try it out if they like.

The vi emulation package for the Emacs system is called Viper. To use Viper, add the following lines to your `.emacs':
 
(setq viper-mode t)
(setq viper-inhibit-startup-message 't)
(setq viper-expert-level '3)
(require 'viper)
We recommend expert level 3, as the most balanced blend of the vi editor with the Emacs system. Most editing modes are aware of Viper, and when you begin editing text you are immediately thrown into Viper. Some modes, however, do not do that. For some, like Dired mode, this is very appropriate. Others, especially custom modes that you have added to your system, are simply not known to Viper, so it does not configure them to enter Viper mode by default. To tell such a mode to enter Viper by default, add a line like the following to your `.emacs' file:
 
(add-hook 'm4-mode-hook 'viper-mode)
The modes that you are most likely to use during software development are
 
c-mode  , c++-mode , texinfo-mode
sh-mode , m4-mode  , makefile-mode
The Emacs distribution has a Viper manual. For more details on setting Viper up, you should read that manual.

The vi editor has these things called editing modes. An editing mode defines how the editor responds to your keystrokes. Vi has three editing modes: insert mode, replace mode and command mode. If you run Viper, there is also the Emacs mode. Emacs indicates which mode you are in by showing one of `<I>', `<R>', `<V>', `<E>' on the statusbar, respectively, for the Insert, Replace, Command and Emacs modes. Emacs also shows you the mode by the color of the cursor. This makes it easy to keep track of which mode you are in.

While you are in Command mode, you can prepend keystrokes with a number; the subsequent keystroke will then be executed that many times. The most important command-mode keystrokes are covered in the Viper manual mentioned above, and a handful of them is enough to get you started. Getting used to dealing with the modes and learning the commands is a matter of building finger habits. It may take you a week or two before you become comfortable with Viper. When Viper becomes second nature to you, however, you won't want to tolerate what you used to use before.



3.5 Using Emacs as an IDE

To use the extended Dired, which we recommend, add the following line to your `.emacs':
 
(add-hook 'dired-load-hook
          (function (lambda () (load "dired-x"))))



3.6 Inserting copyright notices with Emacs



3.7 Using Emacs as an email client



3.8 Handling patches



3.9 Further reading on Emacs



4. Compiling with Makefiles

4.1 Direct compilation  
4.2 Enter Makefiles  
4.3 Problems with Makefiles and workarounds  
4.4 Building libraries  



4.1 Direct compilation

We begin at the beginning. If you recall, we showed you that the hello world program can be compiled very simply with the following command:
 
% gcc hello.c -o hello
See section 1.3 Hello world example. Even in this simple case you have quite a few options, such as debugging flags (`-g'), optimization flags (`-O3') and warning flags (`-Wall').

Here are some variations of the above example:
 
% gcc -g -O3 hello.c -o hello
% gcc -g -Wall hello.c -o hello
% gcc -g -Wall -O3 hello.c -o hello
Compilers have many more flags like that, and some of these flags are compiler dependent.

Now let's consider the case where you have a much larger program, made of source files `foo1.c', `foo2.c', `foo3.c' and header files `header1.h' and `header2.h'. One way to compile the program is like this:
 
% gcc foo1.c foo2.c foo3.c -o foo
This is fine when you have only a few files to deal with. Eventually, when you have more than a hundred files, this becomes very slow and inefficient, because every time you change one of the `foo' files, all of them have to be recompiled. In large projects this can very well take quite a few minutes, and in very large projects, hours. The solution is to compile each part separately and put them all together at the end, like this:
 
% gcc -c foo1.c
% gcc -c foo2.c
% gcc -c foo3.c
% gcc foo1.o foo2.o foo3.o -o foo
The first three lines compile the three parts separately and generate output in the files `foo1.o', `foo2.o', `foo3.o'. The fourth line puts it all back together. This way, if you make a change only in `foo1.c' you just do:
 
% gcc -c foo1.c
% gcc foo1.o foo2.o foo3.o -o foo
This feature of the compiler offers a way out, but it's hardly a solution.

The `make' utility was written to address these problems.



4.2 Enter Makefiles

The `make' utility takes its instructions from a file called `Makefile' in the directory in which it was invoked. The `Makefile' involves four concepts: the target, the dependencies, the rules, and the source. Before we illustrate these concepts with examples we will explain them in abstract terms for those who are mathematically minded:

The `Makefile' is essentially a collection of logical statements about these four concepts. The content of each statement in English is:

To build this target, first make sure that these dependencies are up to date. If not, build them first, in the order in which they are listed. Then execute these rules to build this target.
Given a complete collection of such statements it is possible to infer what action needs to be taken to build a specific target, from the source files and the current state of the distribution. By action we mean passing commands to the shell. One reason why this is useful is because if part of the building process does not need to be repeated, it will not be repeated. The `make' program will detect that certain dependencies have not changed and skip the action required for rebuilding their targets. Another reason why this approach is useful is because it is intuitive in human terms. At least, it will be intuitive when we illustrate it to you.

In make-speak each statement has the following form:
 
target: dependency1 dependency2 ....
       shell-command-1
       shell-command-2
       shell-command-3
where target is the name of the target and dependency* the names of the dependencies, which can be either source files or other targets. The shell commands that follow are the commands that need to be passed to the shell to build the target after the dependencies have been built. To be compatible with most versions of make, you must separate these statements with a blank line. Also, the shell-command* lines must be indented with the tab key. Don't forget your tabs, otherwise make will not work.

When you run make you can pass the target that you want to build as an argument. If you omit arguments and call make by itself, then the first target mentioned in the Makefile is the one that gets built. The makefiles that Automake generates have the phony target all as the default target. That target will compile your code but not install it. They also provide a few more phony targets, such as install, check, dist, distcheck, clean and distclean, as we have discussed earlier. So Automake saves you quite a lot of work, because without it you would have to write a lot of repetitive code to provide all these phony targets.
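
With an Automake-generated `Makefile', a typical session might therefore look something like this, building, testing and rolling a distribution in turn:
 
% make
% make check
% make dist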

To illustrate these concepts with an example suppose that you have this situation:

To build an executable `foo' you need to build object files and then link them together. We say that the executable depends on the object files and that each object file depends on a corresponding `*.c' file and the `*.h' files that it includes. Then to get to an executable `foo' you need to go through the following dependencies:
 
foo: foo1.o foo2.o foo3.o foo4.o
foo1.o: foo1.c gleep2.h gleep3.h
foo2.o: foo2.c gleep1.h
foo3.o: foo3.c gleep1.h gleep2.h
foo4.o: foo4.c gleep3.h
The thing on the left-hand side is the target; the things on the right-hand side are the dependencies. The logic is that to build the thing on the left, you need to build the things on the right first. So, if `foo1.c' changes, `foo1.o' must be rebuilt. If `gleep3.h' changes, then `foo1.o' and `foo4.o' must be rebuilt. That's the game.

What the `Makefile' actually looks like is this:
 
foo: foo1.o foo2.o foo3.o foo4.o
        gcc foo1.o foo2.o foo3.o foo4.o -o foo
 
foo1.o: foo1.c gleep2.h gleep3.h
        gcc -c foo1.c

foo2.o: foo2.c gleep1.h
        gcc -c foo2.c

foo3.o: foo3.c gleep1.h gleep2.h
        gcc -c foo3.c

foo4.o: foo4.c gleep3.h
        gcc -c foo4.c
It's the same thing as before, except that we have supplemented the rules by which each target is built from its dependencies. Things to note about the syntax:

If you omit the tabs or the blank line, then the Makefile will not work. Some versions of `make' have relaxed the blank line rule, since it's redundant, but to be portable, just put the damn blank line in.

You may ask, "how does `make' know what I changed?" It knows because Unix keeps track of the exact date and time at which every file and directory was modified. This is called the Unix time-stamp. What happens then is that `make' checks whether any of the dependencies is newer than the main target. If so, then the target must be rebuilt. Cool. Now, do the target's dependencies have to be rebuilt? Let's look at their dependencies and find out! In this recursive fashion, the logic is untangled and `make' does the Right Thing.
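
You can see the time-stamp logic in action by running make twice in a row. If nothing has changed since the last build, the second run has nothing to do, and GNU make will report something like:
 
% make foo
make: `foo' is up to date.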

The `touch' command allows you to fake time-stamps and make a file look as if it has just been modified. This way you can force make to rebuild everything by saying something like:
 
% touch *.c *.h
If you are building more than one executable, then you may want to make a phony target all be the first target:
 
all: foo1 foo2 foo3
Then calling make will attempt to build all and that will cause make to loop over `foo1', `foo2', `foo3' and get them built. Of course you can also tell make to build these individually by typing:
 
% make foo1
% make foo2
% make foo3
Anything that is a target can be an argument. You might even say
 
% make bar.o
if all you want is to build a certain object file and then stop.



4.3 Problems with Makefiles and workarounds

The main problem with maintaining Makefiles, in fact what we mean when we complain about maintaining Makefiles, is keeping track of the dependencies. The `make' utility will do its job if you tell it what the dependencies are, but it won't figure them out for you. There's a good reason for this, of course, and herein lies the wisdom of Unix: to figure out the dependencies, you need to know something about the syntax of the files that you are working with! And syntax is the turf of the compiler, not of `make'. The GNU compiler honors this responsibility, and if you type:
 
% gcc -MM foo1.c
% gcc -MM foo2.c
% gcc -MM foo3.c
% gcc -MM foo4.c
it will compute the dependencies and print them on standard output. Even so, it is clear that something else is needed to take advantage of this feature, when available, to generate a correct `Makefile' automatically. This is the main problem, for which the only work-around is to use another tool that generates Makefiles.
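
For the curious, the output of `-MM' is itself valid `Makefile' syntax. For the `foo1.c' of our earlier example, it would look something like this:
 
% gcc -MM foo1.c
foo1.o: foo1.c gleep2.h gleep3.h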

The other big problem comes about with situations in which a software project spans many subdirectories. Each subdirectory needs to have a Makefile, and every Makefile must have a way to make sure that `make' gets called recursively to handle the subdirectories. This can be done, but it is quite cumbersome and annoying. Some programmers may choose to do without the advantages of a well-organized directory tree for this reason.

There are a few other little problems, but they have, for the most part, solutions within the realm of the `make' utility. One such problem is that if you move to a system where the compiler is called `cc' instead of `gcc', you need to edit the Makefile everywhere. Here's a solution:
 
CC = gcc 

#CFLAGS = -Wall -g -O3
CFLAGS = -Wall -g

foo: foo1.o foo2.o foo3.o foo4.o
        $(CC) $(CFLAGS) foo1.o foo2.o foo3.o foo4.o -o foo

foo1.o: foo1.c gleep2.h gleep3.h
        $(CC) $(CFLAGS) -c foo1.c

foo2.o: foo2.c gleep1.h
        $(CC) $(CFLAGS) -c foo2.c

foo3.o: foo3.c gleep1.h gleep2.h
        $(CC) $(CFLAGS) -c foo3.c

foo4.o: foo4.c gleep3.h
        $(CC) $(CFLAGS) -c foo4.c
Now the user just has to modify the first line, where the macro-variable `CC' is defined, and whatever he puts there gets substituted in the rules below. The other macro-variable, `CFLAGS', can be used to turn optimization on and off. Putting a `#' mark at the beginning of a line makes the line a comment, and the line is ignored.
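
Incidentally, with most versions of `make' the user does not even have to edit the `Makefile': macro-variables can also be overridden from the command line, like this:
 
% make CC=cc
% make CC=cc CFLAGS="-Wall -g -O3"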

Another problem is that there is a lot of redundancy in this makefile. Every object file is built from the source file the same way. Clearly there should be a way to take advantage of that, right? Here it is:
 
CC = gcc 
CFLAGS = -Wall -g

.SUFFIXES: .c .o 

.c.o:
        $(CC) $(CFLAGS) -c $<

.o:
        $(CC) $(CFLAGS) $< -o $@

foo: foo1.o foo2.o foo3.o foo4.o
foo1.o: foo1.c gleep2.h gleep3.h
foo2.o: foo2.c gleep1.h
foo3.o: foo3.c gleep1.h gleep2.h
foo4.o: foo4.c gleep3.h
Now this is more abstract, and has some cool punctuation. The `.SUFFIXES' line tells `make' that files that are possible targets fall under three categories: files that end in `.c', files that end in `.o', and files that end in nothing. Now let's look at the next line:
 
.c.o:
        $(CC) $(CFLAGS) -c $<
This line is an abstract rule that tells `make' how to make `.o' files from `.c' files. The punctuation marks have the following meanings:

`$<'
is the first dependency of the rule; in a suffix rule, this is the source file from which the target is built
`$@'
is the target
`$^'
are all the dependencies of the current rule
In the same spirit, the next rule tells how to make the executable file from the `.o' files.
 
.o:
        $(CC) $(CFLAGS) $< -o $@
All that has to follow the abstract rules is the dependencies, without the specific rules! If you are using `gcc', these dependencies can be generated automatically and you can then include them from your Makefile. Unfortunately, this approach doesn't work with all of the other compilers. And there is no standard way to include another file into Makefile source. (5) Of course, what we will point out eventually is that `automake' can take care of the dependencies for you.
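
To make that concrete, here is a sketch of how you could do it with GNU `make' specifically, using its non-portable include directive and a `depend' file generated with `gcc -MM'. GNU make will rebuild `depend' from the rule given for it before reading it in:
 
depend: foo1.c foo2.c foo3.c foo4.c
        gcc -MM foo1.c foo2.c foo3.c foo4.c > depend

include depend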

The Makefile in our example can be enhanced in the following way:
 
CC = gcc
CFLAGS = -Wall -g
OBJECTS = foo1.o foo2.o foo3.o foo4.o
PREFIX = /usr/local

.SUFFIXES: .c .o

.c.o:
        $(CC) $(CFLAGS) -c $<

.o:
        $(CC) $(CFLAGS) $< -o $@

foo: $(OBJECTS)
foo1.o: foo1.c gleep2.h gleep3.h
foo2.o: foo2.c gleep1.h
foo3.o: foo3.c gleep1.h gleep2.h
foo4.o: foo4.c gleep3.h

clean:
        rm -f $(OBJECTS)

distclean:
        rm -f $(OBJECTS) foo

install:
        rm -f $(PREFIX)/bin/foo
        cp foo $(PREFIX)/bin/foo
We've added three fake targets called `clean', `distclean' and `install', and introduced a few more macro-variables to control redundancy. I am sure some bells are ringing now. When you type:
 
% make 
the first target (which is `foo') gets built, and your program compiles. When you type
 
% make install
since there is no file called `install' anywhere, the rule there is executed, which has the effect of copying the executable over to `/usr/local/bin'. To get rid of the object files,
 
% make clean
and to get rid of the executable as well
 
% make distclean
Such fake targets are called phony targets in makefile parlance. As you can see, the `make' utility is quite powerful and there's a lot it can do. If you want to become a `make' wizard, all you need to do is read the GNU Make Manual and waste a lot of time spiffying up your makefiles, instead of getting your programs debugged. The GNU Make Manual is extremely well written and will make for enjoyable reading. It is also free, unlike "published" books.
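
One more trick is worth mentioning while we are at it. Because `make' decides what to do by looking at files, a stray file actually named `clean' or `install' would fool it into thinking those targets are up to date. Most versions of `make' therefore let you declare phony targets explicitly, with the special .PHONY target:
 
.PHONY: clean distclean install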

The reason we went to the trouble to explain `make' is because it is important to understand what happens under the hood, and because in many cases `make' is a fine thing to use. It works for simple programs. And it works for many other things, such as formatting TeX documents and so on.

As we evolve to more and more complicated projects, there are two things that we need: a more high-level way of specifying what you want to build, and a way of automatically determining the values to assign to things like CFLAGS, PREFIX and so on. The first is what `automake' does; the second is what `autoconf' does.



4.4 Building libraries

There's one last thing that we need to mention before moving on, and that's libraries. As you recall, to put together an executable, we make a whole bunch of `.o' files and then put them all together. It just so happens in many cases that a set of `.o' files forms a cohesive unit that can be reused in many applications, and you'd like to use it in other programs. To make things simpler, what you do is put the `.o' files together and make a library.

A library is usually composed of many `.c' files and hopefully only one or at most two `.h' files. It's a good practice to minimize the use of header files and put all your gunk in one header file, because this way the user of your library won't have to be typing an endless stream of `#include' directives for every `.c' file he writes that depends on the library. Be considerate. The user might be you! Header files fall under two categories: public and private. The public header files must be installed at `/prefix/include' whereas the private ones are only meant to be used internally. The public header files export documented library features to the user. The private header files export undocumented library features that are to be used only by the developer of the library and only for the purpose of developing the library.

Suppose that we have a library called `barf' that's made of the following files:

`barf.h', `barf1.c', `barf2.c', `barf3.c'
In real life, the names should be more meaningful than that, but we're being general here. To build it, you first make the `.o' files:
 
% gcc -c barf1.c
% gcc -c barf2.c
% gcc -c barf3.c
and then you do this magic:
 
% rm -f libbarf.a
% ar cru libbarf.a barf1.o barf2.o barf3.o
This will create a file `libbarf.a' from the object files `barf1.o', `barf2.o', `barf3.o'. On most Unix systems, the library won't work unless it's "blessed" by a program called `ranlib':
 
% ranlib libbarf.a
On other Unix systems, you might find that `ranlib' doesn't even exist because it's not needed.

The reason for this is historical. Originally, ar was meant to be used merely for packaging files together. The more well-known program tar is a descendant of ar that was designed to handle making such archives on a tape device. Now that tape devices are more or less obsolete, tar is playing the role that was originally meant for ar. As for ar, way back, some people thought to use it to package `*.o' files. However, the linker wanted a symbol table to be passed along with the archive, for the convenience of the people writing the code for the linker, and perhaps also for efficiency. So the ranlib program was written to generate that table and add it to the `*.a' file. Then some Unix vendors thought that if they incorporated ranlib into ar, users wouldn't have to worry about forgetting to call ranlib. So they provided ranlib, but it did nothing. Some of the more evil ones dropped it altogether, breaking many people's makefiles that tried to run ranlib. In the next chapter we will show you that Autoconf and Automake will automatically determine for you how to deal with ranlib in a portable manner.

Anyway, once you have a library, you put the header file `barf.h' under `/usr/local/include' and the `libbarf.a' file under `/usr/local/lib'. If you are in the development phase, you put them somewhere else, under a prefix other than `/usr/local'.

Now, how do we use libraries? Well, suppose that a program uses the barf function defined in the barf library. Then a typical program might look like:
 
/* main.c */
#include <stdio.h>
#include <barf.h>
int main()
{
 printf("This is barf!\n");
 barf();
 printf("Barf me!\n");
 return 0;
}
If the library was installed in `/usr/local' then you can compile like this:
 
% gcc -c main.c
% gcc main.o -o main -lbarf
Of course, if you installed under some `/prefix' other than `/usr/local' or `/usr', then you are in trouble. Now you have to do it this way:
 
% gcc -I/prefix/include -c main.c
% gcc main.o -o main -L/prefix/lib -lbarf
The `-I' flag tells the compiler where to find any extra header files (like `barf.h') and the `-L' flag tells the compiler where to find any extra libraries (like `libbarf.a'). The `-lbarf' flag tells the compiler to bring in the entire `libbarf.a' library with all its enclosed `.o' files and link it in with whathaveyou to produce the executable.

If the library hasn't been installed yet, and is present in the same directory as the object file `main.o' then you can link them by passing its filename instead:
 
% gcc main.o libbarf.a -o main
Please link libraries with their full names if they haven't yet been installed under the prefix directory, and reserve the -l flag for libraries that have already been installed. This is very important. When you use Automake, it helps keep the dependencies straight. And when you use shared libraries, it is absolutely essential.

Also, please pay attention to the order in which you link your libraries. When the linker links a library, it does not embed the entire library into the executable, but only the symbols that are needed from it. In order for the linker to know what symbols are really needed from any given library, it must have already parsed all the other libraries and object files that depend on that library! This implies that you first link your object files, then the higher-level libraries, then the lower-level libraries. If you are the author of the libraries, you must write them in such a manner that the dependency graph of your libraries is a tree. If two libraries depend on each other bidirectionally, then you may have trouble linking them in. This suggests that they should be one library instead!
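
For example, suppose a hypothetical library `libhigh.a' calls functions from a lower-level `liblow.a'. Then `-lhigh' must appear before `-llow' on the link line:
 
% gcc main.o -o main -L/prefix/lib -lhigh -llow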

While we are at the topic, when you compile ordinary programs like the hello world program what really goes on behind the scenes is this:
 
% gcc -c hello.c
% gcc -o hello hello.o -lc
This links in the C system library `libc.a'. The standard include files that you use, such as `stdio.h', `stdlib.h' and whathaveyou, all refer to various parts of this library. It gets linked in by default whenever you link an executable. Note that other C compilers may call their system libraries something else. For this reason the corresponding flags are assumed, and you don't have to supply them.

The catch is that there are many functions that you think of as standard that are not included in the `libc.a' library. For example all the math functions that are declared in `math.h' are defined in a library called `libm.a' which is not linked by default. So if the hello world program needed the math library you should be doing this instead:
 
% gcc -c hello.c
% gcc -o hello hello.o -lm
On some old Linux systems it used to be required that you also link a `libieee.a' library:
 
% gcc -o hello hello.o -lieee -lm
More problems of this sort occur when you use more esoteric system calls like sockets. Some systems require you to link in additional system libraries, such as `libbsd.a', `libsocket.a', `libnsl.a'. Also, if you are linking Fortran and C code together, you must also link the Fortran run-time libraries. These libraries have non-standard names and depend on the Fortran compiler you use. Finally, a very common problem is encountered when you are writing X applications. The X libraries and header files like to be placed in non-standard locations, so you must provide system-dependent -I and -L flags so that the compiler can find them. Also, the most recent version of X requires you to link in some additional libraries on top of libX11.a, and some rare systems require you to link some additional system libraries to access networking features (recall that X is built on top of the sockets interface and is essentially a communications protocol between the computer running the program and the computer that controls the screen on which the X program is displayed). Fortunately, Autoconf can help you deal with all of this. We will cover these issues in more detail in subsequent chapters.

Because it is necessary to link system libraries to form an executable, under copyright law the executable is derived work from the system libraries. This means that you must pay attention to the license terms of these libraries. The GNU `libc' library is under the LGPL license, which allows you to link and distribute both free and proprietary executables. The `stdc++' library is also under terms that permit the distribution of proprietary executables. The `libg++' library, however, only permits you to build free executables. If you are on a GNU system, including Linux-based GNU systems, the legalese is pretty straightforward. If you are on a proprietary Unix system, you need to be more careful. The GNU GPL does not allow GPLed code to be linked against proprietary libraries. Because the system libraries on Unix systems are proprietary, their terms may not allow you to distribute executables derived from them. In practice they do, however, since proprietary Unix systems do want to attract proprietary applications. In the same spirit, the GNU GPL also makes an exception and explicitly permits the linking of GPL code with proprietary system libraries, provided that said libraries are system libraries. This includes proprietary `libc.a' libraries, the `libdxml.a' library in Digital Unix, proprietary Fortran system libraries like `libUfor.a', and the X11 libraries.



5. Using Automake and Autoconf

5.1 Hello World revisited  
5.2 OLD Using configuration headers  
5.3 The building process  
5.4 Some general advice  
5.5 Standard organization with Automake  
5.6 Programs and Libraries with Automake  
5.7 General Automake principles  
5.8 Simple Automake examples  
5.9 Built sources  
5.10 Installation directories.  
5.11 Handling shell scripts  
5.12 Handling other obscurities  



5.1 Hello World revisited

To begin, let's review the simplest example, the hello world program:

`hello.c'
 
#include <stdio.h>
int main()
{
 printf("Howdy, world!\n");
 return 0;
}
`Makefile.am'
 
bin_PROGRAMS = hello
hello_SOURCES = hello.c
`configure.in'
 
AC_INIT(hello.c)
AM_INIT_AUTOMAKE(hello,1.0)
AC_PROG_CC
AC_PROG_INSTALL
AC_OUTPUT(Makefile)

The language of `Makefile.am' is a logic language. There is no explicit statement of execution. Only a statement of relations from which execution is inferred. On the other hand, the language of `configure.in' is procedural. Each line of `configure.in' is a command that is executed.

Seen in this light, here's what the `configure.in' commands shown do:

`AC_INIT'
Initializes the configure script. It is passed a file in the source directory, and when `configure' runs it checks that the file is there, as a sanity check that it is looking at the right source tree.
`AM_INIT_AUTOMAKE'
Performs the additional initializations needed by Automake. The two arguments are the name of the package and its version number.
`AC_PROG_CC'
Checks for a working C compiler.
`AC_PROG_INSTALL'
Checks for a BSD-compatible install program, falling back to a replacement script if none is found.
`AC_OUTPUT'
Generates the files listed as arguments, here `Makefile', from their corresponding `*.in' templates.

The `Makefile.am' is more obvious. The first line specifies the name of the program we are building. The second line specifies the source files that compose the program.

For now, as far as `configure.in' is concerned you need to know the following additional facts:

As we explained before, to build this package you need to execute the following commands:
 
% aclocal
% autoconf
% touch README AUTHORS NEWS ChangeLog
% automake -a 
% configure
% make
The first four commands are for the maintainer only. When the user unpacks a distribution, he should be able to start from `configure' and move on.

If you are curious, you can take a look at the generated `Makefile'. It looks like gorilla spit, but it will give you an idea of how one gets there from the `Makefile.am'.

The `configure' script is an information gatherer. It finds out things about your system. That information is given to you in two ways. One way is through defining C preprocessor macros that you can test for directly in your source code with preprocessor directives. This is done by passing -D flags to the compiler. The other way is by defining certain variables at the `Makefile.am' level. This way you can, for example, have the configure script find out how a certain library is linked, export it as a `Makefile.am' variable, and use that variable in your `Makefile.am'. Also, through certain special variables, `configure' can control how the compiler is invoked by the `Makefile'.
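
Here is a sketch of the second mechanism. The variable FOO_LIBS and the library it holds are hypothetical, but AC_SUBST is the standard Autoconf way of exporting a variable to the generated makefiles. In `configure.in' you would write:
 
FOO_LIBS="-lfoo"
AC_SUBST(FOO_LIBS)
and in `Makefile.am' you could then link it into your program with:
 
hello_LDADD = $(FOO_LIBS)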



5.2 OLD Using configuration headers

As you may have noticed, the `configure' script in the previous example defines two preprocessor macros that you can use in your code: PACKAGE and VERSION. As you become a power-user of `autoconf' you will get to define even more such macros. If you inspect the output of `make' during compilation, you will see that these macros get defined by passing `-D' flags to the compiler, one for each macro. When there are too many of these flags getting passed around, this can cause two problems: it can make the `make' output hard to read, and more importantly, it can hit the buffer limits of various braindead implementations of `make'. To work around this problem, an alternative approach is to define all these macros in a special header file and include it in all the sources.

A hello world program using this technique looks like this:

`configure.in'
 
AC_INIT(hello.c)
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE(hello,0.1)
AC_PROG_CC
AC_PROG_INSTALL
AC_OUTPUT(Makefile)
`Makefile.am'
 
bin_PROGRAMS = hello
hello_SOURCES = hello.c
`hello.c'
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
int main()
{
 printf("Howdy, pardner!\n");
 return 0;
}
Note that we call a new macro in `configure.in': AM_CONFIG_HEADER. Also we include the configuration file conditionally with the following three lines:
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
It is important to make sure that the `config.h' file is the first thing that gets included. Now do the usual routine:
 
% aclocal
% autoconf
% touch NEWS README AUTHORS ChangeLog
% automake -a
Automake will give you an error message saying that it needs a file called `config.h.in'. You can generate such a file with the `autoheader' program. So run:
 
% autoheader
Symbol `PACKAGE' is not covered by acconfig.h
Symbol `VERSION' is not covered by acconfig.h
Again, you get error messages. The problem is that autoheader is bundled with the autoconf distribution, not the automake distribution, and consequently doesn't know how to deal with the PACKAGE and VERSION macros. Of course, if `configure' defines a macro, there's nothing to know. On the other hand, when a macro is not defined then there are at least two possible defaults:
 
#undef PACKAGE
#define PACKAGE 0
The autoheader program here complains that it doesn't know the defaults for the PACKAGE and VERSION macros. To provide the defaults, create a new file `acconfig.h':
`acconfig.h'
 
#undef PACKAGE
#undef VERSION
and run `autoheader' again:
 
% autoheader
At this point you must run aclocal and autoconf again, so that they take into account the presence of `acconfig.h':
 
% aclocal
% autoconf
Now you can go ahead and build the program:
 
% configure
% make
Computing dependencies for hello.c...
echo > .deps/.P
gcc -DHAVE_CONFIG_H -I. -I. -I.   -g -O2 -c hello.c
gcc -g -O2  -o hello  hello.o  
Note that now, instead of multiple -D flags, only one such flag is passed: -DHAVE_CONFIG_H. Also, appropriate -I flags are passed to make sure that `hello.c' can find and include `config.h'. To test the distribution, type:
 
% make distcheck
......
========================
hello-0.1.tar.gz is ready for distribution
========================
and it should all work out.

The `config.h' files go a long way back in history. In the past, there used to be packages where you would have to manually edit the `config.h' file and adjust the macros you wanted defined by hand. This made these packages very difficult to install, because they required intimate knowledge of your operating system. For example, it was not unusual to see a comment saying "if your system has a broken vfork, then define this macro". How the hell are you supposed to know if your system's vfork is broken? With auto-configuring packages all of these details are taken care of automatically, shifting the burden from the user to the developer, where it belongs.

Normally in the `acconfig.h' file you put statements like
 
#undef MACRO
#define MACRO default
These values are copied over to `config.h.in' and are supplemented with additional defaults for C preprocessor macros that get defined by native autoconf macros like AC_CHECK_HEADERS, AC_CHECK_FUNCS, AC_CHECK_SIZEOF, AC_CHECK_LIB.

If the file `acconfig.h' contains the string @TOP@, then all the lines before the string will be included verbatim into `config.h' before the custom definitions. Also, if the file `acconfig.h' contains the string @BOTTOM@, then all the lines after the string will be included verbatim into `config.h' after the custom definitions. This allows you to include further preprocessor directives that are related to configuration. Some of these directives may use the custom definitions to conditionally issue further preprocessor directives. Due to a bug in some versions of autoheader, if the strings @TOP@ and @BOTTOM@ do appear in your acconfig.h file, then you must make sure that there is at least one line before @TOP@ and one line after @BOTTOM@, even if it has to be a comment. Otherwise, autoheader may not work correctly.
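
Here is a sketch of what such an `acconfig.h' might look like. The HAVE_UNISTD_H part is a hypothetical illustration, assuming that `configure.in' calls AC_CHECK_HEADERS(unistd.h):
 
/* This comment guarantees a line before @TOP@. */
@TOP@

#undef PACKAGE
#undef VERSION

@BOTTOM@

#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif
/* This comment guarantees a line after @BOTTOM@. */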

With `autotools' we distribute a utility called `acconfig' which will build `acconfig.h' automatically. By default it will always make sure that
 
#undef PACKAGE
#undef VERSION
are there. Additionally, if you install macros that are `acconfig'-friendly, then `acconfig' will also install entries for these macros. The acconfig program may be revised in the future, and perhaps it might be eliminated. There is an unofficial patch to Autoconf that will automate the maintenance of `acconfig.h', eliminating the need for a separate program. I am not yet certain whether that patch will be part of the official next version of Autoconf, but I very much expect it to be. Until then, if you are interested, see: http://www.clark.net/pub/dickey/autoconf/autoconf.html This situation creates a bit of a dilemma about whether I should document and encourage acconfig in this tutorial or not. I believe that the Autoconf patch is a superior solution. However, since I am not the one maintaining Autoconf, my hands are tied. For now, let's say that if you confine yourself to using only the macros provided by autoconf, automake and autotools, then `acconfig.h' will be completely taken care of for you by `acconfig'. In the future, I hope that `acconfig.h' will be generated by configure and be the sole responsibility of Autoconf.

You may be wondering whether it is worth using `config.h' files in the programs you develop if there aren't all that many macros being defined. My personal recommendation is yes. Use `config.h' files, because perhaps in the future your `configure' might need to define even more macros. So get started on the right foot from the beginning. Also, it is nice to just have a `config.h' file lying around, because you can have all your configuration-specific C preprocessor directives in one place. In fact, if you are one of those people writing peculiar system software where you get to #include 20 header files in every single source file you write, you can just have them all thrown into `config.h' once and for all. In the next chapter we will tell you about the LF macros that get distributed with autotools and this tutorial. These macros do require you to use the `config.h' file. The bottom line is: `config.h' is your friend; trust the config.h.



5.3 The building process

FIXME: write about VPATH builds and how to modify optimization



5.4 Some general advice

In software engineering, people start from a precise, well-designed specification and proceed to implementation. In research, the specification is fluid and immaterial, and the goal is to be able to solve a slightly different problem every day. To have the flexibility to go from variation to variation with the least amount of fuss is the name of the game. By fuss, we refer to debugging, testing and validation. Once you have code that you know gives the right answer to a specific set of problems, you want to be able to move on to a different set of similar problems while reinventing, debugging and testing as little as possible. These are the two distinct situations that computer programmers get to confront in their lives.

Software engineers can take good care of themselves in both situations. It's part of their training. However, people whose specialty is the scientific problem and not software engineering must confront the harder of the two cases, the second one, with very little training in software engineering. As a result they develop code that's clumsy in implementation, clumsy in usage, and whose only redeeming quality is that it gives the right answer. This way, they do get the work of the day done, but they leave behind them no legacy to do the work of tomorrow. No general-purpose tools, no documentation, no reusable code.

The key to better software engineering is to focus away from developing monolithic applications that do only one job, and focus on developing libraries. One way to think of libraries is as a program with multiple entry points. Every library you write becomes a legacy that you can pass on to other developers. Just as in mathematics you develop little theorems and use the little theorems to hide the complexity in proving bigger theorems, in software engineering you develop libraries to take care of low-level details once and for all, so that they are out of the way every time you make a different implementation for a variation of the problem.

On a higher level you still don't create just one application. You create many little applications that work together. The centralized all-in-one approach in my experience is far less flexible than the decentralized approach in which a set of applications work together as a team to accomplish the goal. In fact this is the fundamental principle behind the design of the Unix operating system. Of course, it is still important to glue together the various components to do the job. This you can do either with scripting or with actually building a suite of specialized monolithic applications derived from the underlying tools.

The name of the game is like this: break down the program into parts, and the parts into smaller parts, until you get down to simple subproblems that can be easily tested, and from which you can construct variations of the original problem. Implement each one of these as a library, write test code for each library and make sure that the library works. It is very important for your library to have a complete test suite, a collection of programs that are supposed to run silently and return normally (exit(0);) if they execute successfully, and return abnormally (assert(false); exit(1);) if they fail. The purpose of the test suite is to detect bugs in the library, and to convince you, the developer, that the library works. The best time to write a test program is as soon as it is possible! Don't be lazy. Don't just keep throwing in code after code after code. The minute there is enough new code in there to put together some kind of test program, just do it! I cannot emphasize that enough. When you write new code you have the illusion that you are producing work, only to find out tomorrow that you need an entire week to debug it. As a rule, internalize the reality that you know you have produced new work every time you write a working test program for the new features, and not a minute before. Another time when you should definitely write a test program is when you find a bug while ordinarily using the library. Then, before you even fix the bug, write a test program that detects the bug. Then go fix it. This way, as you add new features to your libraries you have insurance that they won't reawaken old bugs.
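
As a sketch, a test program for a hypothetical function barf_add in the `barf' library of the previous chapter might look like this:
 
#include <assert.h>
#include <stdlib.h>
#include "barf.h"

int main()
{
 /* Run silently; abort loudly if the library gives a wrong answer. */
 assert(barf_add(2, 2) == 4);
 exit(0);
}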

Please keep documentation up to date as you go. The best time to write documentation is right after you get a few new test programs working. You might feel that you are too busy to write documentation, but the truth of the matter is that you will always be too busy. After long hours debugging these seg faults, think of it as a celebration of triumph to fire up the editor and document your brand-spanking new cool features.

Please make sure that computational code is completely separated from I/O code, so that someone else can reuse your computational code without being forced to also follow your I/O model. Then write programs that invoke your collection of libraries to solve various problems. By dividing and conquering the problem library by library, with a test suite for each step along the way, you can write good and robust code. Also, if you are developing numerical software, please don't expect that other users of your code will be getting a high while entering data for your input files. Instead, write an interactive utility that will allow users to configure input files in a user-friendly way. Granted, this is too much work in Fortran. Then again, you do know more powerful languages, don't you?

Examples of useful libraries are things like linear algebra libraries, general ODE solvers, interpolation algorithms, and so on. As a result you end up with two packages. A package of libraries complete with a test suite, and a package of applications that invoke the libraries. The package of libraries is well-tested code that can be passed down to future developers. It is code that won't have to be rewritten if it's treated with respect. The package of applications is something that each developer will probably rewrite since different people will probably want to solve different problems. The effect of having a package of libraries is that C++ is elevated to a Very High Level Language that's closer to the problems you are solving. In fact a good rule of thumb is to make the libraries sufficiently sophisticated so that each executable that you produce can be expressed in one source file. All this may sound like common sense, but you will be surprised at how many scientific developers maintain just one does-everything-program that they perpetually hack until it becomes impossible to maintain. And then you will be even more surprised when you find that some professors don't understand why a "simple mathematical modification" of someone else's code is taking you so long.

Every library must have its own directory and Makefile. So a library package will have many subdirectories, each directory being one library. And perhaps if you have too many of them, you might want to group them even further down. Then, there's the applications. If you've done everything right, there should be enough stuff in your libraries to enable you to have one source file per application. Which means that all the source files can probably go down under the same directory.

Very often you will come to a situation where there's something that your libraries to-date can't do, so you implement it and stick it in your source file for the application. If you find yourself cutting and pasting that implementation to other source files, then this means that you have to put it in a library somewhere. And if it doesn't belong in any library you've written so far, maybe in a new library. When you are in a deadline crunch, there's a tendency not to do this, since it's easier to cut and paste. The problem is that if you don't take action right then, eventually your code will degenerate to a hard-to-use mess. Keeping the entropy down is something that must be done on a daily basis.

Finally, a word about the age-old issue of language choice. The GNU coding standards encourage you to program in C and to avoid using languages other than C, such as C++ or Fortran. The main advantage of C over C++ and Fortran is that it produces object files that can be linked by any C or C++ compiler. In contrast, C++ object files can only be linked by the compiler that produced them. As for Fortran, aside from the fact that Fortran 90 and 95 have no free compilers, it is not trivial to mix Fortran 77 with C/C++, so it makes no sense to invite all that trouble without a compelling reason. Nevertheless, my suggestion is to code in C++. The main benefit you get with C++ is robustness: constructors, destructors and references can go a long way towards helping you avoid memory errors, if you know how to make them work for you.



5.5 Standard organization with Automake

Now we get into the gory details of software organization. I'll tell you one way to do it. This is advice, not divine will. It's simply a way that works well in general, and a way that works well with autoconf and automake in particular.

The first principle is to maintain the package of libraries separate from the package of applications. This is not an iron-clad rule. In software engineering, where you have a crystal-clear specification, it may make no sense to keep the two separate; I have found from experience that it makes a lot more sense in research. Each of these two packages must have a toplevel directory under which live all of its guts. Now what do the guts look like?

First of all you have the traditional set of information files that we described in Chapter 1:
 
README, AUTHORS, NEWS, ChangeLog, INSTALL, COPYING
You also have the following subdirectories:

`m4'
Here you put any new `m4' files that your package may want to install. These files define new `autoconf' macros that you may want to make available to other developers who want to use your libraries.
`doc'
Here you put the documentation for your code. You have the creative freedom to present the documentation in any way you desire. However, the preferred way to document software is to use Texinfo. Texinfo has the advantage that you can produce both on-line help and a nicely printed book from the same source. We will say something about Texinfo later.
`src'
Here's the source code. You could put it at the toplevel directory, as many developers do, but I find it more convenient to keep it in a subdirectory. Automake makes recursive `make' trivially easy, so there is no reason not to take advantage of it to keep your files more organized.
`include'
This is an optional directory for distributions that use many libraries. You can have the configure script link all the public header files from the subdirectories under `src' to this directory. That way, only one `-I' flag needs to be passed to test suites that want to access the include files of other libraries in the distribution. We will discuss this later.
`lib'
This is an optional directory where you put portability-related source code. This is mainly replacement implementations for system calls that may not exist on some systems. You can also put here tools that you commonly use across many different packages, tools too simple to justify making a separate library out of each one. It is suggested that you maintain these tools in one central place. We will discuss this much later.
Together with these subdirectories, you need to put a `Makefile.am' and a `configure.in' file at the toplevel. I also suggest that you add a shell script, which you can call `reconf', containing the following:
 
#!/bin/sh
rm -f config.cache
rm -f acconfig.h
touch acconfig.h
aclocal -I m4
autoconf
autoheader
acconfig
automake -a
exit
This will generate `configure' and `Makefile.in', and needs to be run whenever you change a `Makefile.am' or `configure.in', as well as when you change something under the `m4' directory. It also runs `acconfig', which automatically generates `acconfig.h' and calls `autoheader' to make `config.h.in'. The `acconfig' utility is part of `autotools'; if you are maintaining `acconfig.h' by hand, then you want to use this script instead:
 
#!/bin/sh
rm -f config.cache
aclocal -I m4
autoconf
autoheader
automake -a
exit
At the toplevel directory, you need to put a `Makefile.am' that tells Automake that all the source code is under the `src' directory. The way to do it is to put the following lines in `Makefile.am':
 
EXTRA_DIST = reconf
SUBDIRS = m4 doc src

If you are also using a `lib' subdirectory, then it should be built before `src':
 
EXTRA_DIST = reconf
SUBDIRS = m4 doc lib src
The `lib' subdirectory should build a static library that your executables in `src' link against. There should be no need to install that library.
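
A minimal sketch of such a `lib/Makefile.am', with illustrative file names, uses the `noinst' prefix so that the library is built but never installed:
 
noinst_LIBRARIES = libport.a
libport_a_SOURCES = xmalloc.c xstrdup.c port.h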

At the toplevel directory you also need to put the `configure.in' file. That should look like this:
 
AC_INIT
AM_INIT_AUTOMAKE(packagename,versionnumber)
[...put your tests here...]
AC_OUTPUT(Makefile                   \
          doc/Makefile               \
          m4/Makefile                \
          src/Makefile               \
          src/dir1/Makefile          \
          src/dir2/Makefile          \
          src/dir3/Makefile          \
          src/dir1/foo1/Makefile     \
          ............               \
         )

You will not need another `configure.in' file. However, every directory level in your tree must have a `Makefile.am'. When you call `automake' at the top-level directory, it looks at `AC_OUTPUT' in your `configure.in' to decide which other directories have a `Makefile.am' that needs parsing. As you can see from the above, a `Makefile.am' file is needed even under the `doc' and `m4' directories. How to set those up is up to you. If you aren't building anything in a directory, but just have files and subdirectories hanging around, you must declare them in the `Makefile.am' like this:
 
SUBDIRS = dir1 dir2 dir3
EXTRA_DIST = file1 file2 file3
Doing that will cause `make dist' to include these files and directories in the package distribution.

This tedious setup work needs to be done every time you create a new package. If you create enough packages to get sick of it, then you will want to look into the `acmkdir' utility that is distributed with Autotools. We will describe it in the next chapter.



5.6 Programs and Libraries with Automake

Next we explain how to develop `Makefile.am' files for the source code directory levels. A `Makefile.am' is a set of assignments. These assignments imply a Makefile, that is, a set of targets, dependencies and rules, and the Makefile in turn drives the build.

The first set of assignments, which goes at the beginning of the file, looks like this:
 
INCLUDES = -I/dir1 -I/dir2 -I/dir3 ....
LDFLAGS = -L/dir1 -L/dir2 -L/dir3 .... 
LDADD = -llib1 -llib2 -llib3 ...

If your package contains subdirectories with libraries, and you want to link these libraries in another subdirectory, you need to put `-I' and `-L' flags in the two variables above. To express the path to these other subdirectories, use the `$(top_srcdir)' variable. For example, if you want to access a library under `src/libfoo', you can put something like:
 
INCLUDES = ... -I$(top_srcdir)/src/libfoo ...
LDFLAGS  = ... -L$(top_srcdir)/src/libfoo ...
in the `Makefile.am' of every directory level that wants access to these libraries. You must also make sure that the libraries are built before the directory level that uses them. To guarantee that, list the library directories in `SUBDIRS' before the directory levels that depend on them. One way to do this is to put all the library directories under a `lib' directory and all the executable directories under a `bin' directory, and in the `Makefile.am' of the directory level that contains `lib' and `bin' list them as:
 
SUBDIRS = lib bin
This will guarantee that all the libraries are available before any executables are built. Alternatively, you can simply order your directories in such a way that the library directories are built first.

Next we list the things that are to be built in this directory level:
 
bin_PROGRAMS    = prog1 prog2 prog3 ....
lib_LIBRARIES   = libfoo1.a libfoo2.a libfoo3.a ....
check_PROGRAMS  = test1 test2 test3 ....
TESTS           = $(check_PROGRAMS)
include_HEADERS = header1.h header2.h ....

It is good programming practice to keep libraries and executables under separate directory levels. However, it is okay to keep a library and the check executables that test it under the same directory level, because that makes it easier to link them with the library.

For each of these types of targets, we must state information that will allow automake and make to infer the building process.



5.7 General Automake principles

In the previous section we described how to use Automake to compile programs, libraries and test suites. To exploit the full power of Automake however, it is important to understand the fundamental ideas behind it.

The simplest way to look at a `Makefile.am' is as a collection of assignments which infer a set of Makefile rules, which in turn infer the building process. There are three types of such assignments:

In addition to all this, you may include ordinary targets in a `Makefile.am', just as you would in an ordinary `Makefile.in'. If you do that, however, please check at some point that your distribution can still build properly with `make distcheck'. When you define your own rules to build whatever you want to build, it is very important to follow these guidelines:
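
As an illustration, here is a sketch of a hand-written rule that follows these guidelines: it refers to its input through `$(srcdir)' so that VPATH builds keep working, distributes the input with `EXTRA_DIST', and cleans its output with `CLEANFILES'. The file names here are illustrative:
 
EXTRA_DIST = genmsg.sh
CLEANFILES = messages.h

messages.h: $(srcdir)/genmsg.sh
       sh $(srcdir)/genmsg.sh > messages.h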



5.8 Simple Automake examples

A real-life example of a `Makefile.am' for libraries is the one I use to build the Blas-1 library. It looks like this:

* `blas1/Makefile.am'
 
SUFFIXES = .f
.f.o:
       $(F77) $(FFLAGS) -c $<

lib_LIBRARIES = libblas1.a
libblas1_a_SOURCES = f2c.h caxpy.f ccopy.f cdotc.f cdotu.f crotg.f cscal.f \
 csrot.f csscal.f cswap.f dasum.f daxpy.f dcabs1.f dcopy.f ddot.f dnrm2.f \
 drot.f drotg.f drotm.f drotmg.f dscal.f dswap.f dzasum.f dznrm2.f icamax.f \
 idamax.f isamax.f izamax.f sasum.f saxpy.f scasum.f scnrm2.f scopy.f \
 sdot.f snrm2.f srot.f srotg.f srotm.f srotmg.f sscal.f sswap.f zaxpy.f \
 zcopy.f zdotc.f zdotu.f zdrot.f zdscal.f zrotg.f zscal.f zswap.f
Because the Blas library is written in Fortran, I need to declare the Fortran suffix at the beginning of the `Makefile.am' with the `SUFFIXES' assignment, and then provide an implicit rule for building object files from Fortran files. The variables `F77' and `FFLAGS' are defined by Autoconf, using the Fortran support provided by Autotools. For C or C++ files there is no need to include implicit rules. We discuss Fortran support in a later chapter.

Another important thing to note is the use of the symbol `$<'. We introduced these symbols in the chapter on Makefiles; in a suffix rule, `$<' stands for the prerequisite from which the target is built, here the Fortran source file. If you've been paying attention, you may be wondering why we didn't say `$(srcdir)/$<' instead. The reason is that for VPATH builds, `make' is sufficiently intelligent to substitute `$<' with the Right Thing.

Now consider the `Makefile.am' for building a library for solving linear systems of equations in a nearby directory:

* `lin/Makefile.am'
 
SUFFIXES = .f
.f.o:
       $(F77) $(FFLAGS) -c $<
INCLUDES = -I../blas1 -I../mathutil

lib_LIBRARIES = liblin.a
include_HEADERS = lin.h
liblin_a_SOURCES = dgeco.f dgefa.f dgesl.f f2c.h f77-fcn.h lin.h lin.cc

check_PROGRAMS = test1 test2 test3
TESTS = $(check_PROGRAMS)
LDADD = liblin.a ../blas1/libblas1.a ../mathutil/libmathutil.a $(FLIBS) -lm

test1_SOURCES = test1.cc f2c-main.cc
test2_SOURCES = test2.cc f2c-main.cc  
test3_SOURCES = test3.cc f2c-main.cc
In this case, we have a library that contains mixed Fortran and C++ code. We also have an example of a test suite, which in this case consists of three test programs. What's new here is that in order to link the test suite properly, we need to link in libraries that have already been built in other directories but haven't been installed yet. Because every test program needs to be linked against the same libraries, we set these libraries globally with an `LDADD' assignment that applies to all executables. Because the libraries have not been installed yet, we specify them with their full path. This allows Automake to track dependencies correctly: if `libblas1.a' is modified, the test suite is rebuilt. The variable `INCLUDES' is likewise assigned globally, to make the header files of the other two libraries accessible to the source code in this directory. The variable `$(FLIBS)' is set by Autoconf to link the run-time Fortran libraries, and after it we link the installed `libm.a' library. Because that library is installed, it is linked with the `-l' flag. Another peculiarity in this example is the file `f2c-main.cc', which is shared by all three executables. As we will explain later, when you link executables derived from mixed Fortran and C or C++ code, you need to link this kludge file into the executable.

The test-suite files for numerical code will usually invoke the library to perform a computation for which an exact result is known, and then verify that the computed result agrees with it. For non-numerical code, the library will need to be tested in different ways, depending on what it does.
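
As an illustration, a test program along these lines might look like the following sketch; the `add_' routine, the expected value and the tolerance are all hypothetical, and the non-zero exit status is how `make check' detects a failure:
 
#include <math.h>
#include <stdlib.h>

/* Hypothetical library routine under test. */
extern "C" double add_(double *a, double *b);

int main()
{
 double a = 1.5, b = 2.25;
 double result = add_(&a, &b);
 /* Compare against the known exact result. */
 if (fabs(result - 3.75) > 1e-12)
  exit(1);
 return 0;
}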



5.9 Built sources

In some complicated packages, you want to generate part of the source code by executing a program at compile time. For example, in one of the packages that I wrote for an assignment, I had to generate a file `incidence.out' that contained a lot of hairy matrix definitions that were too ugly to compute and write by hand. That file was then included, with a preprocessor statement, by `fem.cc', which was part of a library that I wrote to solve simple finite element problems:
 
#include "incidence.out"
All source code files that are generated at compile time should be listed in the global definition of `BUILT_SOURCES'. This makes sure that these files are built before anything else. In our example, the file `incidence.out' is computed by running a program called `incidence', which of course also needs to be compiled before it is run. So the `Makefile.am' that we used looked like this:
 
noinst_PROGRAMS = incidence
lib_LIBRARIES = libpmf.a

incidence_SOURCES = incidence.cc mathutil.h
incidence_LDADD = -lm

incidence.out: incidence
      ./incidence > incidence.out

BUILT_SOURCES = incidence.out
libpmf_a_SOURCES = laplace.cc laplace.h fem.cc fem.h mathutil.h

check_PROGRAMS = test1 test2
TESTS = $(check_PROGRAMS)

test1_SOURCES = test1.cc
test1_LDADD = libpmf.a -lm

test2_SOURCES = test2.cc
test2_LDADD = libpmf.a -lm
Note that because the executable `incidence' is created at compile time, the correct path is `./incidence'. Always keep in mind that the correct path to source files, such as `incidence.cc', is `$(srcdir)/incidence.cc'. Because the `incidence' program is used only temporarily, for the purpose of building the `libpmf.a' library, there is no reason to install it. So we use the `noinst' prefix to instruct Automake not to install it.



5.10 Installation directories.

Previously, we mentioned that the symbols `bin', `lib' and `include' refer to installation locations defined respectively by the variables `bindir', `libdir' and `includedir'. For completeness, we will now list the installation locations made available by Automake by default and describe their purpose.

All installation locations are placed under one of the following directories:

`prefix'
The default value of `$(prefix)' is `/usr/local', and it is used to construct installation locations for machine-independent files. The actual value is specified at configure time with the `--prefix' argument. For example:
 
configure --prefix=/home/lf
`exec_prefix'
The default value of `$(exec_prefix)' is `$(prefix)', and it is used to construct installation locations for machine-dependent files. The actual value is specified at configure time with the `--exec-prefix' argument. For example:
 
configure --prefix=/home/lf --exec-prefix=/home/lf/gnulinux
The purpose of using a separate location for machine-dependent files is that it makes it possible to install the software on a networked file server and make it available to machines with different architectures. To do that, there must be separate copies of all the machine-dependent files for each architecture in use.

Executable files are installed in one of the following locations:
 
bindir     = $(exec_prefix)/bin
sbindir    = $(exec_prefix)/sbin
libexecdir = $(exec_prefix)/libexec

`bin'
Executable programs that users can run.
`sbin'
Executable programs for the super-user.
`libexec'
Executable programs to be called by other programs.

Library files are installed under
 
libdir = $(exec_prefix)/lib

Include files are installed under
 
includedir = $(prefix)/include

Data files are installed in one of the following locations:
 
datadir        = $(prefix)/share
sysconfdir     = $(prefix)/etc
sharedstatedir = $(prefix)/com
localstatedir  = $(prefix)/var

`data'
Read-only architecture-independent data files.
`sysconf'
Read-only configuration files that pertain to a specific machine. All the files in this directory should be ordinary ASCII files.
`sharedstate'
Architecture-independent data files which programs modify while they run.
`localstate'
Data files which programs modify while they run that pertain to a specific machine.

Autoconf macros should be installed in `$(datadir)/aclocal'. There is no symbol defined for this location, so you need to define it yourself:
 
m4dir = $(datadir)/aclocal
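
You can then install your macro files there with a `DATA' assignment; a minimal sketch, with an illustrative file name:
 
m4_DATA = mymacros.m4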

FIXME: Emacs Lisp files?

FIXME: Documentation?

To encourage tidiness, Automake also provides the following locations, so that each package can keep its files under its own subdirectory:
 
pkglibdir         = $(libdir)/@PACKAGE@
pkgincludedir     = $(includedir)/@PACKAGE@
pkgdatadir        = $(datadir)/@PACKAGE@
There are a few other such `pkg' locations, but they are rarely useful in practice.



5.11 Handling shell scripts

Sometimes you may feel the need to implement some of your programs in a scripting language, like Bash or Perl. For example, the `autotools' package is exclusively a collection of shell scripts. In theory, a script does not need to be compiled. However, there are still issues pertaining to scripts, such as:

To let Automake deal with all this, you need to use the `SCRIPTS' primitive. By listing a file under a `SCRIPTS' primitive assignment, you are telling Automake that this file needs to be built and should be installable in a location where executable files are normally installed. By default, Automake will not clean scripts when you invoke the `clean' target. To force Automake to clean all the scripts, you need to add the following line to your `Makefile.am':
 
CLEANFILES = $(bin_SCRIPTS)
You also need to write your own targets for building the script by hand.

For example:

`hello1.sh'
 
# -* bash *-
echo "Howdy, world!"
exit 0
`hello2.pl'
 
# -* perl *-
print "Howdy, world!\n";
exit(0);
`Makefile.am'
 
bin_SCRIPTS = hello1 hello2
CLEANFILES = $(bin_SCRIPTS)
EXTRA_DIST = hello1.sh hello2.pl

hello1: $(srcdir)/hello1.sh
      rm -f hello1
      echo "#! " $(BASH) > hello1
      cat $(srcdir)/hello1.sh >> hello1
      chmod ugo+x hello1

hello2: $(srcdir)/hello2.pl
      $(PERL) -c hello2.pl
      rm -f hello2
      echo "#! " $(PERL) > hello2
      cat $(srcdir)/hello2.pl >> hello2
      chmod ugo+x hello2
`configure.in'
 
AC_INIT
AM_INIT_AUTOMAKE(hello,0.1)
AC_PATH_PROGS(BASH, bash sh)
AC_PATH_PROGS(PERL, perl perl5.004 perl5.003 perl5.002 perl5.001 perl5)
AC_OUTPUT(Makefile)
Note that in the "source" files `hello1.sh' and `hello2.pl' we do not include a line like
 
#!/bin/bash
#!/usr/bin/perl
Instead we let Autoconf pick up the correct path, and then we insert it during make. Since we omit the #! line, we leave a comment instead that indicates what kind of file this is.

In the special case of Perl, we also invoke
 
perl -c hello2.pl
This checks the Perl script for correct syntax. If your scripting language supports such a feature, I suggest that you use it to catch errors at "compile" time. The `AC_PATH_PROGS' macro looks for a specific utility and returns its full path.

If you wish to conform to the GNU coding standards, you may want your script to support the --help and --version flags, and you may want --version to pick up the version number from AM_INIT_AUTOMAKE.

Here's an enhanced hello world script:
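
A minimal sketch of the shell version follows, assuming that `configure' substitutes the version information into a `version.sh' fragment which is merged into the script during `make'; the variable names are illustrative:
 
# -* bash *-

# Merged in from version.sh at build time; assumed to define, e.g.:
#   PROGRAM=hello1
#   VERSION=0.1

for arg in "$@"
do
 case "$arg" in
  --help)    echo "Usage: $PROGRAM [--help] [--version]"; exit 0;;
  --version) echo "$PROGRAM $VERSION"; exit 0;;
 esac
done

echo "Howdy, world!"
exit 0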

Basically, the idea with this approach is that when `configure' calls `AC_OUTPUT', it substitutes the files `version.sh' and `version.pl' with the correct version information. Then, during building, the version files are merged with the scripts. The scripts themselves need some standard boilerplate code to handle the options. I've included that code here as a sample implementation, which I hereby place in the public domain.

This approach can easily be generalized to other scripting languages as well, like Python and Guile.



5.12 Handling other obscurities

To install data files, you should use the `DATA' primitive instead of `SCRIPTS'. The main difference is that `DATA' allows you to install files in data installation locations, whereas `SCRIPTS' will only install files in executable installation locations.

Normally it is assumed that the files listed in `DATA' are not derived, so they are not cleaned. If, however, you do want to derive them from an executable, you can do so like this:

 
bin_PROGRAMS = mkdata
mkdata_SOURCES = mkdata.cc

pkgdata_DATA = thedata
CLEANFILES = $(pkgdata_DATA)

thedata: mkdata
      ./mkdata > thedata

In general, however, data files are boring. You just write them and list them in a `DATA' assignment:
 
pkgdata_DATA = foo1.dat foo2.dat foo3.dat ...

If your package requires you to edit a certain type of file, you might want to write an Emacs editing mode for that file type. Emacs modes are written in Elisp files, which are suffixed with `.el', as in `foo.el'. Automake will byte-compile and install Elisp files using Emacs for you. You need to invoke the
 
AM_PATH_LISPDIR
macro in your `configure.in' and list your Elisp files under the `LISP' primitive:
 
lisp_LISP = mymode.el
The `LISP' primitive also accepts the `noinst' location.

There is also support for installing Autoconf macros and documentation, and for dealing with shared libraries. These issues are complicated, however, and they will be discussed in separate chapters.



6. Using Autotools

6.1 Introduction  
6.2 Compiler configuration with the LF macros  
6.3 The features of `LF_CPP_PORTABILITY'  
6.4 Writing portable C++  
6.5 Hello world revisited again  
6.6 Invoking `acmkdir'  
6.7 Handling Embedded text  
6.8 Handling very deep packages  



6.1 Introduction

At the moment, Autotools distributes the following additional utilities:

We have already discussed the `gpl' utility in Chapter 2. In this chapter we will focus mainly on the LF macros and the `acmkdir' utility, but we will postpone our discussion of Fortran support until the next chapter.



6.2 Compiler configuration with the LF macros

In the last chapter we explained that a minimal `configure.in' file looks like this:
 
AC_INIT
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE(package,version)
AC_PROG_CXX
AC_PROG_RANLIB
AC_OUTPUT(Makefile ... )
If you are not building libraries, you can omit AC_PROG_RANLIB.

Alternatively you can use the following macros that are distributed with Autotools, and made accessible through the `aclocal' utility. All of them are prefixed with `LF' to distinguish them from the standard macros:

LF_CONFIGURE_CC
This macro is equivalent to the following invocation:
 
AC_PROG_CC
AC_PROG_CPP
AC_AIX
AC_ISC_POSIX
AC_MINIX
AC_HEADER_STDC
which is a traditional Autoconf idiom for setting up the C compiler.
LF_CONFIGURE_CXX
This macro calls
 
AC_PROG_CXX
AC_PROG_CXXCPP
and then invokes the portability macro:
 
LF_CPP_PORTABILITY
This is the recommended way for configuring your C++ compiler.
LF_HOST_TYPE
This is here mainly because it is required by `LF_CONFIGURE_FORTRAN'. The macro determines your operating system and defines the C preprocessor macro `YOUR_OS' with the answer. You can use this in your program for spiffiness purposes, such as having the program identify itself at the user's request, or during initialization.
LF_CPP_PORTABILITY
This macro allows you to make your C++ code more portable and a little nicer. If you call this macro, do so after calling `LF_CONFIGURE_CXX'. We describe the features in more detail in the next section. To take advantage of these features, all you have to do is
 
#include <config.h>
In the past it was necessary to include a file called `cpp.h'. I've sent that file straight to hell.
LF_SET_WARNINGS
This macro enables you to activate warnings at configure time. If it is called, then the user can request warnings by passing the `--with-warnings' flag to `configure', like this:
 
$ configure ... --with-warnings ...
Warnings can help you find many bugs, as well as help you improve your coding habits. On the other hand, in many cases these warnings are false alarms, which is why the default behaviour of the compiler is to not show them to you. You are probably interested in warnings if you are the developer, or a paranoid end-user.
The minimal recommended `configure.in' file for a pure C++ project is:
 
AC_INIT
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE(package,version)
LF_CONFIGURE_CXX
AC_PROG_RANLIB
AC_OUTPUT(Makefile .... )

A full-blown `configure.in' file, for projects that mix Fortran and C++ (and that may also need the C compiler, if `f2c' is used), invokes all of the above macros:
 
AC_INIT
AM_INIT_AUTOMAKE(package,version)
LF_CANONICAL_HOST
LF_CONFIGURE_CC
LF_CONFIGURE_CXX
LF_CONFIGURE_FORTRAN
LF_SET_WARNINGS
AC_PROG_RANLIB
AC_CONFIG_SUBDIRS(fortran/f2c fortran/libf2c)
AC_OUTPUT(Makefile ...)



6.3 The features of `LF_CPP_PORTABILITY'

In order for `LF_CPP_PORTABILITY' to work correctly, you need to append certain things to the bottom of your `acconfig.h'. This is done for you automatically by `acmkdir'. When the `LF_CPP_PORTABILITY' macro is invoked from `configure.in', the following portability problems are checked:

In addition to these workarounds, the following features are introduced at the end of the default `acconfig.h'. The features are enabled only if your `configure.in' calls `LF_CPP_PORTABILITY'.



6.4 Writing portable C++

The C++ language was standardized only very recently. As a result, not all compilers fully support all the features that the ANSI C++ standard requires, including the g++ compiler itself. Some of the problems commonly encountered, such as incorrect scoping in for-loops and lack of the `bool' data type, can easily be worked around. In this section we give some tips for avoiding other portability problems. I welcome readers to email me their tips, to be included in this tutorial.

FIXME: I need to add some stuff here.



6.5 Hello world revisited again

Putting all of this together, we will now show you how to create a super Hello World package, using the LF macros and the utilities distributed with `autotools'.

The first step is to build a directory tree for the new project. Instead of doing it by hand, use the `acmkdir' utility. Type:
 
% acmkdir hello
`acmkdir' prompts you with the current directory pathname. Make sure that this is indeed the directory where you want to create the directory tree for the new package. You will be prompted for some information about the newly created package. When you are done, `acmkdir' asks you if you really want to go for it. Say `y'. Then `acmkdir' will do the following:

It should be obvious that having to do these tasks manually for every package you write can get tiring. With `acmkdir' you can get all this grunt-work out of the way in a matter of seconds.

Now enter the directory `hello-0.1/src' and start coding:
 
% cd hello-0.1/src
% gpl -cc hello.cc
% vi hello.cc
% vi Makefile.am
This time we will use the following modified hello world program:
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <iostream.h>

int main()
{
 cout << "Welcome to " << PACKAGE << " version " << VERSION;
 cout << " for " << YOUR_OS << endl;
 cout << "Hello World!" << endl;
}
and for `Makefile.am' the same old thing:
 
bin_PROGRAMS = hello
hello_SOURCES = hello.cc 
Now back to the toplevel directory:
 
% cd ..
% reconf
% configure
% make
% src/hello
Welcome to hello version 0.1 for i486-pc-linux-gnulibc1
Hello World!
Note that by using the special macros `PACKAGE', `VERSION' and `YOUR_OS', the program can identify itself, its version number and the operating system for which it was compiled. `PACKAGE' and `VERSION' are defined by `AM_INIT_AUTOMAKE', and `YOUR_OS' by `LF_HOST_TYPE'.

Now you can experiment with the various options that configure offers. You can do:
 
% make distclean
and reconfigure the package with one of the following variations in options:
 
% configure --disable-assert
% configure --with-warnings
or a combination of the above. You can also build a distribution of your hello world and feel cool about yourself:
 
% make distcheck
The important thing is that you can write extensive programs like this and stay focused on writing code, instead of maintaining stupid header files, scripts, makefiles and all that.



6.6 Invoking `acmkdir'

The `acmkdir' utility can be invoked in the simple manner shown in the previous section to prepare a directory tree for writing C++ code. Alternatively, it can be instructed to create directory trees for Fortran/C++ code, as well as documentation directories.

In general, you invoke `acmkdir' in the following manner:
 
% acmkdir [OPTIONS] "dirname"
If you are creating a toplevel directory, then everything will appear under `dirname-0.1'. Otherwise, the name `dirname' will be used instead.

`acmkdir' supports the following options:

`--help'
Print a short message summarizing the usage of the `acmkdir' command.
`--version'
Print the version information and copyright notice for `acmkdir'.
`-latex'
Instruct `acmkdir' to create a LaTeX documentation directory (see section 9.5 Writing documentation with LaTeX). If your package will have more than one documentation text, you will usually want to invoke this under the `doc' subdirectory:
 
% cd doc
% acmkdir -latex tutorial
% acmkdir -latex manual
Of course, the `Makefile.am' under the `doc' directory will need to refer to these subdirectories with a SUBDIRS entry:
 
SUBDIRS = tutorial manual
Alternatively, if you decide to use the `doc' directory itself for documentation (and you are massively sure about this), then you can
 
% rm -rf doc
% acmkdir -latex doc
You should use this feature if you wish to typeset your documentation in LaTeX instead of Texinfo. The disadvantage of using LaTeX for your documentation is that you can only produce a printed book; you cannot also generate on-line documentation. The advantage is that you can typeset very complex mathematics, something you cannot do in Texinfo, since it only uses plain TeX. If you are documenting mathematical software, you may prefer to write the documentation in LaTeX. Autotools will provide you with LaTeX macros to make your printed documentation look like Texinfo printed documentation.
`-t, --type=TYPE'
Instruct `acmkdir' to create a top-level directory of type TYPE. The types available are: default, traditional, fortran. Eventually I may implement two additional types: f77, f90.

Now, a brief description of these toplevel types:

default
This is the default type of toplevel directory. It is intended for C++ programs and uses the LF macros installed by Autotools. The `acconfig.h' file is automagically generated and a custom `INSTALL' file is installed. The defaults reflect my own personal habits.
traditional
This is much closer to the FSF's default habits. The default language is C, the traditional Autoconf macros are used, and the `acconfig.h' file is not automatically generated, except for adding the lines
 
#undef PACKAGE
#undef VERSION
which are required by Automake.
fortran
This is a rather complicated type, intended for programs that mix C++ and Fortran. It installs an appropriate `configure.in' and creates a directory called `fortran' under the toplevel directory. In that directory, a copy of the `f2c' translator is installed. The software is configured so that if a Fortran compiler is not available, `f2c' is built instead and then used to compile the Fortran code. We will explain all about Fortran in the next chapter.



6.7 Handling Embedded text

In some cases, we want to embed text into the executable file of an application. This may be on-line help pages, or a script of some sort that we intend to execute with an interpreter library that we are linking with, like Guile or Tcl. Whatever the reason, if we want to compile the application as a stand-alone executable, it is necessary to embed the text in the source code. Autotools provides the build tools necessary to do this painlessly.

As a tutorial example, we will write a simple program that prints the contents of the GNU General Public License. First create the directory tree for the program:
 
% acmkdir copyleft
Enter the directory and create a copy of the txtc compiler:
 
% cd copyleft-0.1
% mktxtc
Then edit the file `configure.in' and add a call to the LF_PROG_TXTC macro. This macro depends on
 
AC_PROG_CC
AC_PROG_AWK
so make sure that these are invoked as well. Finally, add `txtc.sh' to your `AC_OUTPUT'. The end result should look like this:
 
AC_INIT(reconf)
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE(copyleft,0.1)
LF_HOST_TYPE
LF_CONFIGURE_CC
LF_CONFIGURE_CXX
LF_SET_OPTIMIZATION
LF_SET_WARNINGS
AC_PROG_RANLIB
AC_PROG_AWK
LF_PROG_TXTC
AC_OUTPUT(Makefile txtc.sh doc/Makefile m4/Makefile src/Makefile)
Then, enter the `src' directory and create the following files:
 
% cd src
% gpl -l gpl.txt
% gpl -cc gpl.h
% gpl -cc copyleft.cc
The `gpl.txt' file contains the text that we want to print; you can substitute any text you want. This file will be compiled into `gpl.o' during the build process. The `gpl.h' file is a header file that gives access to the symbols defined by `gpl.o'. The file `copyleft.cc' is where `main' will be written.

Next, add content to these files as follows:

gpl.h
 
extern int gpl_txt_length;
extern char *gpl_txt[];
copyleft.cc
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <iostream.h>
#include "gpl.h"
 
int main()
{
 loop(i,1,gpl_txt_length)
 { cout << gpl_txt[i] << endl; }
}
Makefile.am
 
SUFFIXES = .txt
.txt.o:
       $(TXTC) $<
 
bin_PROGRAMS = copyleft
copyleft_SOURCES = copyleft.cc gpl.h gpl.txt
and now you're set to build. Go back to the toplevel directory and go for it:
 
$ cd ..
$ reconf
$ configure
$ make
$ src/copyleft | less
To verify that this works properly, do the following:
 
$ cd src
$ ./copyleft > copyleft.out
$ diff gpl.txt copyleft.out
The two files should be identical. Finally, convince yourself that you can make a distribution:
 
$ make distcheck
and there you are.

Note that, in general, the text file as encoded by the text compiler will not always be identical to the original. There is one and only one modification made: if any line has blank spaces at the end, they are trimmed off. This feature was introduced to deal with a bug in the Tcl interpreter, and it is in general a good idea, since it conserves a few bytes, never hurts, and trailing whitespace shouldn't really be there anyway.

This magic is put together from many different directions. It begins with the LF_PROG_TXTC macro:

LF_PROG_TXTC
This macro will define the variable TXTC to point to a Text-to-C compiler. To create a copy of the compiler at the toplevel directory of your source code, use the mktxtc command:
 
% mktxtc
The compiler is implemented as a shell script, and it depends on `sed', `awk' and the C compiler, so you should call the following two macros before invoking `LF_PROG_TXTC':
 
AC_PROG_CC
AC_PROG_AWK
The compiler is intended to be used as follows:
 
$(TXTC) text1.txt text2.txt text3.txt ...
such that, given the files `text1.txt', `text2.txt', and so on, object files `text1.o', `text2.o', etc. are generated that contain the text from these files.
From the Automake point of view, you need to add the following lines to your `Makefile.am':
 
SUFFIXES = .txt
.txt.o:
        $(TXTC) $<
assuming that your text files end in the `.txt' suffix. The first line informs Automake that there exist source files using non-standard suffixes. Then we describe, in terms of an abstract Makefile rule, how to build an object file from these non-standard suffixes. Recall the use of the symbol `$<'. Also note that it is not necessary to use `$(srcdir)' on `$<' for VPATH builds. If you embed more than one type of file, then you may want to use more than one suffix. For example, you may have `.hlp' files containing on-line help and `.scm' files containing Guile code. Then you want to write a rule for each suffix, as follows:
 
SUFFIXES = .hlp .scm
.hlp.o:
        $(TXTC) $<
.scm.o:
        $(TXTC) $<
It is important to put these lines before any `SOURCES' assignments. Automake is smart enough to parse these abstract makefile rules and recognize that files ending in these suffixes are valid source code that can be built into object code. This allows you to simply list `gpl.txt' with the other source files in the `SOURCES' assignment:
 
copyleft_SOURCES = copyleft.cc gpl.h gpl.txt
In order for this to work however, Automake must be able to see your abstract rules first.

When you "compile" a text file `foo.txt' this makes an object file that defines the following two symbols:
 
int foo_txt_length;
char *foo_txt[];
Note that the dot characters are converted into underscores. To make these symbols accessible, you need to define an appropriate header file with the following general form:
 
extern int foo_txt_length; 
extern char *foo_txt[];
When you include this header file into your other C or C++ files then:

and that's all there is to it.



6.8 Handling very deep packages

When making a package, you can organize it as a flat package or a deep package. In a flat package, all the source files are placed under `src' without any subdirectory structure. In a deep package, libraries and groups of executables are separated by a subdirectory structure. The perennial problem with deep packages is dealing with interdirectory dependencies. What do you do if, to compile one library, you need header files from another library in another directory? What do you do if, to compile the test suite of your library, you need to link in another library that has just been compiled in a different directory?

One approach is to just put all these interdependent things in the same directory. This is not unreasonable, since the `Makefile.am' can document quite thoroughly where each file belongs, in case you need to split them up in the future. On the other hand, this solution becomes less and less appealing as your project grows: you may not want to clutter a directory with source code for too many different things. What do you do then?

The second approach is to be careful about these dependencies and just invoke the necessary features of Automake to make everything work out.

For `*.a' files (library binaries), the recommended thing to do is to link them by giving the full relative pathname. Doing so allows Automake to work out the dependencies correctly across multiple directories. It also allows you to easily upgrade to shared libraries with Libtool. To retain some flexibility, it may be best to list these interdirectory link sequences in variables and then use those variables. This way, when you move things around, you minimize the amount of editing you have to do. In fact, if all you need these library binaries for is to build a test suite, you can simply assign them to `LDADD'. To make these assignments more uniform, you may want to start your pathnames with `$(top_builddir)'.
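
For example, a sketch along these lines, with illustrative paths:
 
LIBFOO = $(top_builddir)/src/libfoo/libfoo.a
LIBBAR = $(top_builddir)/src/libbar/libbar.a

check_PROGRAMS = test1
TESTS = $(check_PROGRAMS)
test1_SOURCES = test1.cc
LDADD = $(LIBFOO) $(LIBBAR) -lm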

For *.h files (header files), you can include an
 
INCLUDES = -I../dir1 -I../dir2 -I../dir3 ...
assignment on every `Makefile.am' of every directory level listing the directories that contain include files that you want to use. If your directory tree is very complicated, you may want to make these assignments more uniform by starting your pathnames from $(top_srcdir). In your source code, you should use the syntax
 
#include "foo.h"
for include files in the current directory and
 
#include <foo.h>
for include files in other directories.

There is a better, third approach, provided by Autotools, but it applies only to include files. There is nothing more that can be done with library binaries; you simply have to give the path. With header files, however, it is possible to arrange at configure time for all header files to be symlinked under the directory `$(top_srcdir)/include'. Then you only need to list one directory instead of many.

Autotools provides two Autoconf macros: LF_LINK_HEADERS and LF_SET_INCLUDES, to handle this symlinking.

LF_LINK_HEADERS
This macro links the public header files of a given set of directories under a toplevel `include' directory. A simple way to invoke it is to list the directories that contain public header files:
 
LF_LINK_HEADERS(src/dir1 src/dir2 src/dir3 ... src/dirN)
When this macro is invoked for the first time, the directory `$(top_srcdir)/include' is erased. Then, for each directory `src/dirK' listed, we look for the file `src/dirK/Headers' and link the public header files mentioned in that file under `$(top_srcdir)/include'. The link will be either symbolic or hard, depending on the capabilities of your operating system; if possible, a symbolic link is preferred.

You can invoke the same macro by passing an optional argument that specifies a directory name. For example:
 
LF_LINK_HEADERS(src/dir1 src/dir2 ... src/dirN , foo)
Then the symlinks will be created under the `$(top_srcdir)/include/foo' directory instead. This can be particularly useful if you have very many header files to install and you'd like them to be included like this:
 
#include <foo/file1.h>
During compilation, the `-I' flags needed to make such includes work are set up by the macro described next.

LF_SET_INCLUDES
This macro causes the `Makefile.am' variable `$(default_includes)' to contain the correct collection of `-I' flags, so that the include files are accessible. If you invoke it with no arguments, as
 
LF_SET_INCLUDES
then the following assignment will take place:
 
default_includes = -I$(prefix) -I$(top_srcdir)/include
If you invoke it with arguments:
 
LF_SET_INCLUDES(dir1 dir2 ... dirN)
then the following assignment will take place instead:
 
default_includes = -I$(prefix) -I$(top_srcdir)/include/dir1 \
                   -I$(top_srcdir)/include/dir2 ...         \
                   -I$(top_srcdir)/include/dirN
You may use this variable as part of your INCLUDES assignment in your `Makefile.am' like this:
 
INCLUDES = $(default_includes)
If your distribution has a `lib' directory, in which you install various codelets and header files, then a path to that directory is added to `default_includes' as well. In that case, you get one of the following:
 
default_includes = -I$(prefix) -I$(top_srcdir)/lib -I$(top_srcdir)/include
or
 
default_includes = -I$(prefix) -I$(top_srcdir)/lib \
                   -I$(top_srcdir)/include/dir1 ... \
                   -I$(top_srcdir)/include/dirN

A typical use of this system involves invoking
 
LF_LINK_HEADERS(src/dir1 src/dir2 ... src/dirN)
LF_SET_INCLUDES
in your `configure.in' and adding the following two lines in your `Makefile.am':
 
INCLUDES = $(default_includes)
EXTRA_DIST = Headers
The variable `$(default_includes)' will be set by the configure script to point to the Right Thing. You will also need to include a file called `Headers' in every directory level mentioned in `LF_LINK_HEADERS', listing the public header files that you wish to symlink, one filename per line. You also need to mention these public header files in an
 
include_HEADERS = foo1.h foo2.h ...
assignment, in your `Makefile.am', to make sure that they are installed.
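
For example, if `foo1.h' and `foo2.h' are the public headers of a directory level, its `Headers' file would simply contain:
 
foo1.h
foo2.h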

With this usage, other programs can access the installed header files as:
 
#include <foo1.h>
Other directories within the same package can access the not-yet-installed header files in exactly the same manner. Finally, in the same directory you should access the header files as
 
#include "foo1.h"
This will force the compiler to use the header file in the current directory, even when a similar header file is already installed. This is very important when you are rebuilding a new version of an already installed library; otherwise, the build might get confused and include the already installed, out-of-date header files from the older version.

Alternatively, you can categorize the header files under a directory, by invoking
 
LF_LINK_HEADERS(src/dir1 src/dir2 , name1)
LF_LINK_HEADERS(src/dir3 src/dir4 , name2)
LF_SET_INCLUDES(name1 name2)
in your `configure.in'. In your `Makefile.am' files you still add the same two lines:
 
INCLUDES = $(default_includes)
EXTRA_DIST = Headers
and maintain the `Headers' file as before. However, now the header files will be symlinked to subdirectories of `$(top_srcdir)/include'. This means that although uninstalled header files in all directories must be included by code in the same directory as:
 
#include "header.h"
code in other directories must access these uninstalled header files as
 
#include <name1/header.h>
if the header file is under `src/dir1' or `src/dir2' or as
 
#include <name2/header.h>
if the header file is under `src/dir3' or `src/dir4'. It follows that you probably intend for these header files to be installed correspondingly, so that other programs can also include them in the same way. To accomplish that, under `src/dir1' and `src/dir2' you should list the header files in your `Makefile.am' like this:
 
name1dir = $(includedir)/name1
name1_HEADERS = header.h ...
and under `src/dir3' and `src/dir4' like this:
 
name2dir = $(includedir)/name2
name2_HEADERS = header.h

One disadvantage of this approach is that the source tree is modified at configure time, even during a VPATH build. Some may not like that, but it suits me just fine. Unfortunately, because Automake requires the GNU compiler to compute dependencies, the header files need to be placed in a constant location with respect to the rest of the source code. If a `mkdep' utility were distributed with Automake, computing dependencies when the installer builds the software rather than when the developer rolls a source code distribution, then it would be possible for the location of the header files to be dynamic. If that development ever takes place in Automake, Autotools will immediately follow. If you really don't like this, then don't use this feature.

Usually, if you are installing one or two header files per library you want them to be installed under $(includedir) and be includable with
 
#include <foo.h>
On the other hand, there are many applications that install a lot of header files for just one library. In that case, you should put them under a prefix and let them be included as:
 
#include <prefix/foo.h>
Examples of libraries doing this are X11 and Mesa.

This mechanism for tracking include files is most useful for very large projects. You may not want to bother with it for simple, homework-like throwaway hacks. When a project starts to grow, it is very easy to switch.



7. C++ and Autoconf

In this chapter I will discuss in extreme detail the portability issues with C++. Most of this work will be based on `bzconfig', which I will eventually adapt for inclusion in Autotools. I don't know the structure of this chapter yet.



8. Fortran with Autoconf

8.1 Introduction to Fortran support  
8.2 Fortran compilers and linkage  
8.3 Walkthrough a simple example  
8.4 The gory details  
8.5 Portability problems with Fortran  



8.1 Introduction to Fortran support

This chapter is devoted to Fortran. We will show you how to build programs that combine Fortran and C or C++ code in a portable manner. The main reason for wanting to do this is that there is a lot of free software written in Fortran. If you browse `http://www.netlib.org/' you will find a repository of lots of old, archaic, but very reliable free sources. These programs encapsulate a lot of the experience of numerical analysis research over the last couple of decades, which is crucial to getting work done. All of these sources are written in Fortran. As a developer today, if you know other programming languages, it is unlikely that you will want to write original code in Fortran. You may need, however, to use legacy Fortran code, or the code of a neighbour who still writes in Fortran.

The most portable way to mix Fortran with your C/C++ programs is to translate the Fortran code to C with the `f2c' translator and compile everything with a C/C++ compiler. The `f2c' translator is available at `http://www.netlib.org/', but as we will soon explain, it is also distributed with the `autotools' package. Another alternative is to use the GNU Fortran compiler `g77' together with `g++' and `gcc'. This compiler is portable to many platforms, so if you want to use a native Fortran compiler without sacrificing portability, this is one way to do it. Yet another way is to use your operating system's native Fortran compiler, usually called `f77', provided that it is compatible with `g77' and `f2c'. Because performance is also very important in numerical code, a good strategy is to prefer the native compiler when it is compatible, and to support `g77' as a fall-back option. And because many sysadmins don't install `g77', supporting `f2c' as a third fall-back is also a good idea.

Autotools provides support for configuring and building source code written in part or in whole in Fortran. The implementation is based on the build system used by GNU Octave, which has been generalized for use by any program.



8.2 Fortran compilers and linkage

The traditional Hello world program in Fortran looks like this:
 
c....:++++++++++++++=
      PROGRAM MAIN
      PRINT*,'Hello World!'
      END
All lines that begin with `c' are comments. The first line is the equivalent of `main()' in C. The second line says hello, and the third line indicates the end of the code. It is important that all statement lines begin at column 7 (that is, they are indented by six spaces); otherwise the compiler will issue a syntax error. Also, if you want to be ANSI-compliant, you must write your code all in caps. Nowadays most compilers don't care, but some may still do.

To compile this with `g77' (or `f77') you do something like:
 
% g77 -o hello hello.f
% hello
To compile it with the f2c translator:
 
% f2c hello.f
% gcc -o hello hello.c -lf2c -lm
where `-lf2c' links in the translator's system library. In order for this to work, you will have to make sure that the header file f2c.h is present since the translated code in `hello.c' includes it with a statement like
 
#include "f2c.h"
which explicitly requires it to be present in the current working directory.

In this case, the `main' is written in Fortran. However, most of the Fortran you will be using will actually be subroutines and functions. A subroutine looks like this:
 
c....:++++++++++++++
      SUBROUTINE FHELLO (C)
      CHARACTER *(*) C
      PRINT*,'From Fortran: ',C
      RETURN
      END
This is the analog of a `void' function in C, because it takes arguments but doesn't return anything. The prototype declaration is K&R style: you list all the arguments in parentheses, separated by commas, and you declare the types of the variables in the subsequent lines.

Suppose that this subroutine is saved as `fhello.f'. To call it from C, you need to know what it looks like from the point of view of the C compiler. To find out, type:
 
% f2c -P fhello.f
% cat fhello.P
You will find that this subroutine has the following prototype declaration:
 
extern int fhello_(char *c__, ftnlen c_len);
It may come as a surprise, and this is a moment of revelation, but although in Fortran it appears that the subroutine takes one argument, in C it appears to take two! And this is what makes it difficult to link code between C and Fortran in a portable manner. In C, everything is what it appears to be: if a function takes two arguments, then down at the machine-language level there are two arguments being passed around. In Fortran, things are hidden from you and done in a magic fashion. The Fortran programmer thinks he is passing one argument, but the compiler generates code that actually passes two arguments around. In this particular case, the reason is that the argument being passed is a string. In Fortran, strings are not null-terminated, so the `f2c' compiler passes the length of the string as an extra hidden argument. This is called the linkage method of the compiler. Unfortunately, linkage in Fortran is not standard, and there exist compilers that handle strings differently; for example, some compilers prepend the string with a few bytes containing the length and pass a pointer to the whole thing. This problem is not limited to strings; it happens in many other instances. The `f2c' and `g77' compilers follow compatible linkage, and we will use this linkage as the ad hoc standard. A few proprietary Fortran compilers, like the DEC Alpha `f77' and the Irix `f77', are also `f2c'-compatible. The reason is that most compiler developers derived their code from `f2c'. So although a standard was never really intended, we have one anyway.
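
To make the hidden argument concrete, here is a minimal sketch of calling `fhello_' from C under this linkage; the `ftnlen' type comes from `f2c.h', and the message text is illustrative:
 
#include <string.h>
#include "f2c.h"

extern int fhello_(char *c, ftnlen c_len);

int main()
{
 char msg[] = "Greetings";
 /* The string length travels as the extra hidden argument. */
 fhello_(msg, (ftnlen) strlen(msg));
 return 0;
}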

A few things to note about the above prototype declaration: the symbol `fhello' is in lower case, even though in Fortran we write everything in upper case, and it has an underscore appended. On some platforms, the proprietary Fortran compiler deviates from the `f2c' standard, either by forcing the name to be in upper case or by omitting the underscore. Fortunately, these cases can be detected with Autoconf and worked around with conditional compilation. Beyond this, however, other portability problems, such as the strings issue, are too involved to deal with, and in those cases it is best to fall back to `f2c' or `g77'. A final thing to note is that although `fhello' doesn't return anything, it has return type `int' rather than `void'. The reason is that `int' is the default return type for functions that are not declared; to prevent compilation problems in case the user forgets to declare a Fortran function, `f2c' uses `int' as the return type for subroutines.
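
As an illustration of such a workaround, a header can adapt the symbol name to the detected linkage; the `F77_UPPERCASE' and `F77_NO_UNDERSCORE' macros here are hypothetical names that an Autoconf test could define in `config.h':
 
/* F77_UPPERCASE and F77_NO_UNDERSCORE are hypothetical config.h macros. */
#if defined(F77_UPPERCASE)
#define fhello_ FHELLO
#elif defined(F77_NO_UNDERSCORE)
#define fhello_ fhello
#endif

/* With the mapping above, this declaration and all calls to fhello_
   resolve to whatever symbol the Fortran compiler actually emits. */
extern int fhello_(char *c, ftnlen c_len);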

In Fortran parlance, a subroutine is what we'd call a `void' function; to Fortran programmers, for something to be a function it has to return something. This is reflected in the syntax. For example, here's a function that adds two numbers and returns the result:
 
c....:++++++++++++++++
      DOUBLE PRECISION FUNCTION ADD(A,B)
      DOUBLE PRECISION A,B
      ADD = A + B
      RETURN
      END
The name of the function is also the name of the return variable. If you run this one through `f2c -P' you will find that the C prototype is:
 
extern doublereal add_(doublereal *a, doublereal *b);
There are a few things to note here: the function name is again lower-cased and has an underscore appended; the arguments are passed as pointers, because Fortran passes all arguments by reference; and `doublereal' is a typedef provided by `f2c.h' corresponding to DOUBLE PRECISION.
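
As an illustration, here is a minimal sketch of calling this function from C, using the prototype that `f2c -P' produced above (the same linkage assumptions as before apply):
 
#include <stdio.h>
#include "f2c.h"     /* defines doublereal */

extern doublereal add_(doublereal *a, doublereal *b);

int main(void)
{
  doublereal a = 1.5, b = 2.5;
  /* Fortran passes by reference, so we pass addresses. */
  printf("%g\n", add_(&a, &b));
  return 0;
}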

A more interesting case is when we deal with complex numbers. Consider a function that multiplies two complex numbers:
 
c....:++++++++++++++++++++++++++++++
      COMPLEX*16 FUNCTION MULT(A,B)
      COMPLEX*16 A,B
      MULT = A*B
      RETURN
      END
As it turns out, the prototype for this function is:
 
extern Z_f mult_(doublecomplex *ret_val, doublecomplex *a, doublecomplex *b);
Because complex numbers are not a native type in C, they cannot be returned efficiently without going through at least one copy. Therefore, for this special case the return value is placed as the first argument of the prototype! Actually, despite many people's feeling that Fortran must die, it is still the best tool for writing optimized functions that are heavy on complex arithmetic.
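
Here is a minimal sketch of calling this function from C under the same linkage assumptions; note that the result comes back through the first argument, and that `doublecomplex' is a struct from `f2c.h' with members `r' and `i':
 
#include <stdio.h>
#include "f2c.h"     /* defines doublecomplex and Z_f */

extern Z_f mult_(doublecomplex *ret_val, doublecomplex *a, doublecomplex *b);

int main(void)
{
  doublecomplex a = {1.0, 2.0}, b = {3.0, -1.0};
  doublecomplex result;
  /* The "return value" is written into the first argument. */
  mult_(&result, &a, &b);
  printf("(%g, %g)\n", result.r, result.i);
  return 0;
}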


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

8.3 Walkthrough a simple example

Now that we have brought up some of the issues about Fortran linkage, let's show you how to work around them. We will write a simple Fortran function, and a C program that calls it, and then show you how to turn these two into a GNU-like package, enhanced with a configure script and the works. This discussion assumes that you have installed the utilities in `autotools', the package with which this tutorial is being distributed.

First, begin by building a directory for your new package. Because this project will involve Fortran, you need to pass the `-t fortran' option to `acmkdir':
 
% acmkdir -t fortran foo
The `-t fortran' option directs `acmkdir' to unpack a copy of the `f2c' translator and to build proper top-level `configure.in' and `Makefile.am' files. This will take a while, so relax and stretch a little bit.

Now enter the `foo-0.1' directory and look around:
 
% cd foo-0.1
% cat configure.in
AC_INIT
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE(foo,0.1)
LF_CONFIGURE_CC
LF_CONFIGURE_CXX
AC_PROG_RANLIB
LF_HOST_TYPE
LF_PROG_F77_PREFER_F2C_COMPATIBILITY
dnl LF_PROG_F77_PREFER_NATIVE_VERSION
LF_PROG_F77
LF_SET_WARNINGS
AC_CONFIG_SUBDIRS(fortran/f2c fortran/libf2c)
AC_OUTPUT([Makefile fortran/Makefile f2c_comp
        doc/Makefile m4/Makefile src/Makefile ])
   
% cat Makefile.am
EXTRA_DIST = reconf configure
SUBDIRS = fortran m4 doc src
There are some new macros in `configure.in' and a new subdirectory: `fortran'. There is also a file that looks like a shell script called `f2c_comp.in'. We will discuss the gory details about all this in the next section. Now let's write the code. Enter the `src' directory and type:
 
$ cd src
$ mkf2c
This creates the following files:

`f2c.h'
This is the header file that we alluded to in the previous section. It needs to be present in every directory level that contains Fortran code. It defines all the funny type names that appear in `f2c'-compatible prototype declarations; a simplified excerpt is shown after this list.
`f2c-main.c'
This file contains some silly definitions. You need to link it in whenever you link a program, but don't add it to any libraries, because then, when you link some of those libraries together, you will get error messages about duplicate symbols. The contents of this file are:
 
#ifdef __cplusplus
extern "C" {
#endif

#if defined (sun)
int MAIN_ () { return 0; }
#elif defined (linux) && defined(__ELF__)
int MAIN__ () { return 0; }
#endif

#ifdef __cplusplus
}
#endif
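
For reference, the type names used throughout this chapter come from `f2c.h'. A simplified excerpt (the real header defines many more types, and the exact widths may differ between versions) looks roughly like this:
 
typedef long int integer;
typedef float real;
typedef double doublereal;
typedef struct { doublereal r, i; } doublecomplex;
typedef long ftnlen;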
Now, time to write some code:
 
$ vi fhello.f
$ vi hello.cc
with
`fhello.f'
 
c....:++++++++++++++++++++++++++++++
      SUBROUTINE FHELLO (C)
      CHARACTER *(*) C
      PRINT*,'From Fortran: ',C
      RETURN
      END
`hello.cc'
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <string.h>
#include "f2c.h"
#include "f77-fcn.h"

extern "C"
{
 extern int f77func(fhello,FHELLO)(char *c__, ftnlen c_len);
}

int main()
{
 char s[30];
 strcpy(s,"Hello world!");
 f77func(fhello,FHELLO)(s,ftnlen(strlen(s)));
 return 0;
}
The definition of the f77func macro is included in `acconfig.h' automatically for you if the LF_CONFIGURE_FORTRAN macro is included in your `configure.in'. The definition is as follows:
 
#ifndef f77func
#if defined (F77_APPEND_UNDERSCORE)
#  if defined (F77_UPPERCASE_NAMES)
#    define f77func(f, F) F##_
#  else
#    define f77func(f, F) f##_
#  endif
#else
#  if defined (F77_UPPERCASE_NAMES)
#    define f77func(f, F) F
#  else
#    define f77func(f, F) f
#  endif
#endif
#endif
Recall that we said that the issues of whether an underscore is appended and whether the name of the routine is capitalized can be dealt with by conditional compilation. This macro is where that conditional compilation happens. The LF_PROG_F77 macro will define
 
F77_APPEND_UNDERSCORE
F77_UPPERCASE_NAMES
appropriately so that f77func does the right thing.
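
For instance, tracing through the definition above, a call written with `f77func' expands as follows under the various settings:
 
/* F77_APPEND_UNDERSCORE defined, F77_UPPERCASE_NAMES undefined (f2c/g77): */
f77func(fhello,FHELLO)(s, len);   /* becomes: fhello_(s, len); */

/* Both F77_APPEND_UNDERSCORE and F77_UPPERCASE_NAMES defined: */
f77func(fhello,FHELLO)(s, len);   /* becomes: FHELLO_(s, len); */

/* Neither defined: */
f77func(fhello,FHELLO)(s, len);   /* becomes: fhello(s, len); */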

To compile this, create a `Makefile.am' as follows:
 
SUFFIXES = .f
.f.o:
        $(F77) -c $<
         
bin_PROGRAMS = hello
hello_SOURCES = hello.cc fhello.f f2c.h f2c-main.c
hello_LDADD = $(FLIBS)
Note that the above `Makefile.am' is only compatible with version 1.3 of Automake or newer. Previous versions don't grok Fortran filenames in `hello_SOURCES', so you may want to upgrade.

Now you can compile and run the program:
 
$ cd ..
$ reconf
$ configure
$ make
$ src/hello
 From Fortran: Hello world!
If a native `f77' compiler or the portable `g77' compiler was used, then you missed out on the coolness of using `f2c'. To check that out, do:
 
$ make distclean
$ configure --with-f2c
$ make
and witness the beauty! The package will begin by building an `f2c' binary for your system. Then it will build the Fortran libraries. And finally, it will build the hello world program which you can run as before:
 
$ src/hello
It may seem overkill to carry around a Fortran compiler. On the other hand, you will find it very convenient, and the `f2c' compiler isn't really that big. If you are spoiled by a well-equipped system with a good system administrator, it may come as a nasty surprise one day when you discover that the rest of the world is not necessarily like that.

If you download a real Fortran package from Netlib, you might find it very annoying to have to enter the filenames of all the Fortran files in `*_SOURCES'. A work-around is to put all these files in their own directory and then use this awk trick:
 
% ls *.f | awk '{ printf("%s ", $1) }' > tmp
The awk filter will line up the output of ls on one line. You can use your editor to insert the contents of `tmp' into your `Makefile.am'. Eventually I may come around to writing a utility that does this automagically.
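
For example, if the directory contains the (hypothetical) files `a.f', `b.f' and `c.f', the pasted line in `Makefile.am' would end up looking like this:
 
libfoo_a_SOURCES = a.f b.f c.f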


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

8.4 The gory details

The best way to get started is by building the initial directory tree with `acmkdir' like this:
 
% acmkdir -t fortran <directory-filename>
This will install all the standard stuff. It will also install a directory called `fortran' containing a copy of the `f2c' compiler, as well as `f2c_comp', a shell script that invokes the compiler in a way that looks the same as invoking a real Fortran compiler.

The file `configure.in' uses the following special macros:

LF_PROG_F77_PREFER_F2C_COMPATIBILITY
This macro tells Autoconf that the user prefers f2c compatibility over performance. In general, Fortran programmers are willing to sacrifice everything for the sake of performance. However, if you want to mix Fortran code with C and C++ code, you have many reasons to also give weight to f2c compatibility. Use this macro to state that preference. The effect is that if the installer's platform has a native Fortran compiler installed, it will be used only if it is f2c-compatible. This macro must be invoked before LF_PROG_F77.
LF_PROG_F77_PREFER_NATIVE_VERSION
This macro tells Autoconf that the user prefers performance and doesn't care about f2c compatibility. You may want to invoke this instead if your entire program is written in Fortran. This macro must be invoked before LF_PROG_F77.
LF_PROG_F77
This macro probes the installer platform for an appropriate Fortran compiler. It exports the following variables to Automake:
`F77'
The name of the Fortran compiler
`FFLAGS'
Flags for the Fortran compiler
`FLIBS'
The link sequence for the compiler runtime libraries
It also checks whether the compiler appends underscores to the symbols, and whether the symbols are written in lowercase or uppercase characters, and defines the following preprocessor macros:
F77_APPEND_UNDERSCORE
Define if the compiler appends an underscore to the symbol names.
F77_UPPERCASE_NAMES
Define if the compiler uses uppercase for symbol names.
These preprocessor macros are used to define the `f77func' macro, which takes two arguments, the name of the Fortran subroutine or function in lower case and the same name in upper case, and expands to the correct symbol name to use for invoking the routine from C or C++. To obtain the calling sequence for the symbol, do:
 
% f2c -P foo.f
on the file containing the subroutine and examine the file `foo.P'. In order for this macro to work properly you must precede it with calls to
 
AC_PROG_CC
AC_PROG_RANLIB
LF_HOST_TYPE
You also need to call one of the two *_PREFER_* macros. The default is to prefer f2c compatibility.
In addition to invoking all of the above, you need to make provision for the bundled Fortran compiler by adding the following lines at the end of your `configure.in':
 
AC_CONFIG_SUBDIRS(fortran/f2c fortran/libf2c)
AC_OUTPUT([Makefile fortran/Makefile f2c_comp
           doc/Makefile m4/Makefile src/Makefile])
The AC_CONFIG_SUBDIRS macro directs `configure' to execute the configure scripts in `fortran/f2c' and `fortran/libf2c'. The entries in AC_OUTPUT that are important to Fortran support are `fortran/Makefile' and `f2c_comp'. Because `f2c_comp' is mentioned in AC_OUTPUT, Automake will automagically bundle it when you build a source code distribution.

If you have originally set up your directory tree for a C or C++ only project and later you realize that you need to also use Fortran, you can upgrade your directory tree to Fortran as follows:

If a directory level contains Fortran source code, then it is important to let Automake know about it by adding the following lines at the beginning of that directory's `Makefile.am':
 
SUFFIXES = .f
.f.o:
        $(F77) -c $<
This is pretty much the same idea as with the embedded text compiler. You can list the Fortran source code filenames in the SOURCES assignments, together with your C and C++ code. To link executables, you must add $(FLIBS) to LDADD and link against `f2c-main.c', just as in the hello world example; a sketch follows. Please do not include `f2c-main.c' in any libraries, however.
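
Putting these rules together, a `Makefile.am' for a directory that mixes a Fortran convenience library with a C++ program might look like this (a rough sketch; the names `libfoo', `bar', `solver.f', `util.f' and `bar.cc' are hypothetical):
 
SUFFIXES = .f
.f.o:
        $(F77) -c $<

noinst_LIBRARIES = libfoo.a
libfoo_a_SOURCES = solver.f util.f f2c.h

bin_PROGRAMS = bar
bar_SOURCES = bar.cc f2c-main.c f2c.h
bar_LDADD = libfoo.a $(FLIBS)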

Now consider the file `hello.cc' line by line. First we include the standard configuration stuff:
 
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <string.h>
Then we include the Fortran-related header files:
 
#include "f2c.h"
Then we declare the prototype for the Fortran subroutine:
 
extern "C"
{
 extern int f77func(fhello,FHELLO)(char *c__, ftnlen c_len);
}
There are a few things to note here: the declaration is wrapped in an `extern "C"' block so that the C++ compiler does not mangle the symbol name; the `f77func' macro selects the correct spelling of the symbol for the platform's Fortran linkage; and the extra `ftnlen' argument carries the hidden string length, as explained earlier in this chapter.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

8.5 Portability problems with Fortran

Fortran is infested with portability problems. There exist two important Fortran standards: one that was written in 1966 and one that was written in 1977. The 1977 standard is considered to be the standard Fortran. Most Fortran code is written by scientists who have never had any formal training in computer programming. As a result, they often write code that depends on vendor extensions to the standard and is not necessarily easy to port. The standard itself is to blame as well, since it is sorely lacking in many aspects. For example, even though standard Fortran has both REAL and DOUBLE PRECISION data types (corresponding to float and double), the standard only supports single-precision complex numbers (COMPLEX). Since many people also want double-precision complex numbers, many vendors provided extensions. Most commonly, the double-precision complex number is called COMPLEX*16, but you might also see it called DOUBLE COMPLEX. Other such vendor extensions include providing a flush operation of some sort for file I/O, and other such esoteric things.

To make things worse (or better), there are now two more standards out there: the 1990 standard and the 1995 standard, and a 2000 standard is also in the works. Fortran 90 and its successors try to make Fortran more like C and C++, and even though there are no free compilers for these variants, they are becoming alarmingly popular with the scientific community. In fact, I think that the main reason why these variants of Fortran are being developed is to create more business for proprietary compiler developers. So far as I know, Fortran 90 does not provide any features that C++ cannot support with a class library extension. Moreover, Fortran 90 does not have the comprehensive foundation that allows C++ to be a self-extensible language. This makes it less worthwhile to invest effort in Fortran 90, because eventually people will want features that can only be implemented by redefining the language and rewriting its compilers. In C++, by contrast, you can add features to the language simply by writing C++ code, because it has enough core features to allow virtually unlimited self-extensibility.

If your primary interests are portability and free software, you should stay away from Fortran 90 as well as Fortran 95, until someone writes a free compiler for them. You will be better off developing in C++ and migrating to Fortran 77 only the parts that are performance-critical. This way you get the best of both worlds.

On the flip side, if you limit your Fortran code to number-crunching, then it becomes much easier to write portable code. There are still a few things you should take into account, however. Some Fortran code has been written in the archaic 1966 style; an example of such code is the fftpack package from Netlib. The main problems with such code are the following:

In general the code in http://www.netlib.org/ is very reliable and portable, but you do need to keep your eyes open for little problems like the above.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9. Maintaining Documentation

9.1 Writing proper manuals  
9.2 Introduction to Texinfo  
9.3 Markup in Texinfo  
9.4 GNU Emacs support for Texinfo  
9.5 Writing documentation with LaTeX  
9.6 Creating a LaTeX package  
9.7 Further reading about LaTeX  


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.1 Writing proper manuals

FIXME: Advice on how to write a good manual. General stuff. Reference manual vs. user manual. When to write a manual. How to structure a manual. Texinfo vs. LaTeX. Copyright issues.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.2 Introduction to Texinfo


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.3 Markup in Texinfo


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.4 GNU Emacs support for Texinfo


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.5 Writing documentation with LaTeX


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.6 Creating a LaTeX package


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

9.7 Further reading about LaTeX

The appendices


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

A. Legal issues with Free Software

A.1 Understanding Copyright  
A.2 Other legal concerns  
A.3 Freeing your software  


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

A.1 Understanding Copyright

If you are just writing your own programs for your own internal use and you don't plan to redistribute them, you don't really need to worry too much about copyright. However, if you want to give your programs to other people, or use programs that were written by other people, then copyright issues become relevant. The main reason why `autoconf' and `automake' were developed was to facilitate the distribution of source code for the GNU project by making packages autoconfiguring. So, if you want to use these tools, you probably also want to know something about copyright issues. The following sections will focus primarily on the legal issues surrounding software. For a discussion of the philosophical issues please see B. Philosophical issues. At this point, I should point out that I am not a lawyer, this is not legal advice, and I do not represent the opinions of the Free Software Foundation.

When you create an original work, like a computer program, or a novel, and so on, the government automatically grants you a set of legal rights called copyright. This means that you have the right to forbid others to use, modify and redistribute your work. By default, you have the exclusive right to do these things, and anyone who would like to use, modify or redistribute your work needs to enter an agreement with you. The government grants you this monopoly and limits the freedom of the public because, it is believed, this will encourage the creation of more works.

Copyright is transferable. This means that you have the right to transfer most of the rights that we call copyright to another person or organization. When a work is being developed by a team, it makes legal sense to transfer the copyright to a single organization that can then coordinate enforcement of the copyright. In the free software community, some people assign their software to the Free Software Foundation. The arrangement is that copyright is transferred to the FSF. The FSF then grants you all the rights back in the form of a license agreement, and commits itself legally to distributing the work only as free software. If you want to do this, you should contact the FSF for more information. It is not a good idea to assign your copyright to anyone else, unless you know what you are getting into. By assigning your rights to someone and not getting any of those rights back in the form of an agreement, you may place yourself in a position where you are not allowed to use your own work. Unfortunately, if you are employed, or a student at a university, you have probably already signed many of your rights away. Universities, as well as companies, like to lay as much claim as possible on any copyrightable work you produce, even work that you do as a hobby that has nothing to do with them.

Copyright covers mainly original works. However, it also covers derived works. If you grant someone permission to modify your code, and that person goes ahead and produces a modified version, then that version is a derived work, and legally its owner is still you. Similarly, if you write a library and another person writes a program that links against your library, then the executable is a derived work of both the library and the portion that person wrote. As a result, that person can only distribute the executable under terms that are consistent with the terms of the library it is based on.

The concept of derived work is actually very slippery ground. The key to understanding it is that copyright law covers implementations, not algorithms. This means that if you take someone's code, fire up an editor and modify it, then the result is a derived work. If someone else takes the code, writes an essay describing what it does, gives you the essay, and you go ahead and rewrite the code from scratch, then it is not a derived work. In fact, it will not be a derived work even if, by some stroke of luck, the two works are identical, provided you can prove to the court that you wrote your version completely from scratch.

Because copyright law is by default restrictive, you must explicitly grant permissions to your users to enable them to use your work when you give them a copy. One way of doing this is by simply granting them permissions, which can be made conditional on certain requirements. In the free software community, we standardize on using a legal document, the General Public License, to grant such permissions.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

A.2 Other legal concerns

In addition to copyright law, there is another legal beast: the patent law. Unlike copyright, which you own automatically by the act of creating the work, you don't get a patent unless you file an application for it. If approved, the work is published but others must pay you royalties in order to use it in any way.

The problem with patents is that they cover algorithms, and if an algorithm is patented you can't write an implementation of it without a license. What makes it worse is that it is very difficult and expensive to find out whether the algorithms that you use are patented or will be patented in the future. What makes it insane is that the patent office, in its infinite stupidity, has patented algorithms that are very trivial, with nothing innovative about them. For example, the use of backing store in a multiprocessing window system, like X11, is covered by patent 4,555,775. In the spring of 1991, the owner of the patent, AT&T, threatened to sue every member of the X Consortium, including MIT. Backing store is the idea that the windowing system saves the contents of all windows at all times. This way, when a window is covered by another window and then exposed again, it is redrawn by the windowing system, and not by the code responsible for the application. Other insane patents include the IBM patent 4,674,040, which covers "cut and paste between files" in a text editor. Recently, a Microsoft-backed company called "Wang" took Netscape to court over a patent that covered "bookmarks"! Wang lost. Although most of these patents don't stand a chance in court, the cost of litigation is sufficient to terrorize small businesses, non-profit organizations like the Free Software Foundation, and individual software developers.

Companies like to use software patents as strategic weapons. They build an arsenal of software patents by trying to pass whatever they can through the Patent Office. Then, years later, when another company threatens their interests, they can go through their patent arsenal and sue the other company. So far there have been no patent attacks aimed directly at the free software community. In November 1998, however, two internal Microsoft memos about our community were leaked. According to these memos, Microsoft perceives the free software community as a competitor, and they seem to consider a patent-based attack among other things. We live in interesting times.

An additional legal burden, on top of both copyrights and patents, is governmental paranoia over encryption algorithms. According to the US government, a computer program implementing an encryption algorithm is considered a munition, and the export-control laws on munitions therefore apply. What is not allowed under these laws is to export the software outside the borders of the US. The government is pushing the issue by claiming that making encryption software available on the internet is the same thing as exporting it. Zimmermann, the author of a popular encryption program, was prosecuted by the government based on this interpretation of the law. However, the government's position was not tested in court because the government decided to drop the charges, after dragging the case out for a few years, long enough to send a message of terror to the internet community. The current wisdom seems to be that it is okay to make encryption software available on the net, provided that you take strong measures to prevent foreigners from downloading your work. It should be noted, however, that doing so is still taking a legal risk that could land you in federal prison in the company of illegal arms dealers.

It is quite obvious that the government's attitude towards encryption is completely unconstitutional, because it violates our inalienable right to freedom of speech. Apparently, it is the current policy of the government that publishing a book containing the source code for encryption software is legal, but publishing the same content electronically is illegal. The reason the government maintains such a strange position is that in the past it tried to suppress even the publication of encryption algorithms in books. When the RSA algorithm was discovered, the NSA attempted to prevent the inventors from publishing their discovery in journals and presenting it at conferences. Judges understand books and conferences, so the government has given up fighting that battle. It still hasn't given up on the electronic front, however.

Other countries also have restrictive laws against encryption. In certain places you may not even be allowed to run such programs. The reason why governments are so paranoid about encryption is that it is the key to a wide array of technologies that empower individual citizens to circumvent governmental snooping on their privacy. The US export laws, however, hurt US business interests, and they are pointless, since good encryption software is available on the internet from other countries.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

A.3 Freeing your software

Both copyright and patent laws are being used mainly to destroy our freedom to cooperate with our fellow hackers. By freedom we refer to three things: the freedom to use software, the freedom to modify it and improve it, and the freedom to redistribute it with the modifications and improvements so that the whole community benefits. Combined with the possible default assignment of your rights to an employer or university, the laws can actually interfere even with your ability to write computer programs for a hobby and cooperate with other hackers on that basis!

To defend our freedoms from those who would like to take them from us, the free software community uses the General Public License, also known as the GPL. In broad strokes, the GPL grants everyone the freedom to use, modify and redistribute the program; it requires that redistributed versions, modified or not, carry the same license, so that these freedoms cannot be stripped from derived works; and it requires that the source code be made available along with any distributed copies.

The purpose of the GPL is to use copyright law to encourage a world in which software is not copyrighted. If copyright didn't cover software, then we would all be free to use, modify and redistribute software, and we would not be able to restrict others from enjoying these freedoms, because there would be no law giving anyone such power. One way to grant the freedoms to the users of your software is to revoke your copyright on the software completely. This is called putting your work in the public domain. The problem with this is that it only grants the freedoms; it does not create the reality in which no-one can take these freedoms away from derived works. In fact, copyright law covers derived works by default, regardless of whether the original was public domain or copyrighted. By distributing your work under the GPL, you grant the same freedoms, and at the same time you protect these freedoms from hoarders.

The GNU GPL is a legal instrument that has been designed to create a safe haven in which software can be written free from copyright-law encumbrance. It allows developers to share their work freely with a friendly community that is also willing to share its own, and at the same time it protects them from being exploited by publishers of proprietary software. Many developers would not contribute to our community without this protection.

To apply the GPL to your programs you need to do the following things: attach a copyright notice to each source file; add, right after it, a notice stating that the file is distributed under the terms of the GNU General Public License; and include the full text of the license with your distribution, conventionally in a file called `COPYING'.

If you are unfamiliar with all this legalese, you may find it surprising; you might even find it stupid. This is a very natural reaction. Until 1980, software copyright was not taken seriously in the US. In fact, copyrights then had to be registered in order to be valid, and it was very natural for people to just copy software around, even though they knew it was illegal. It took significant amounts of lobbying and propaganda by proprietary publishers to cultivate the current litigious paranoia over copyrights and "convince" the public that helping out their neighbour by giving them an unauthorized copy is not only illegal but also "morally wrong". Even though copyright laws are international, through treaties, there are many countries in the world where this brainwashing hasn't yet taken place, and where people still make unauthorized copies of software for their friends with no second thoughts. Such people are described with smear words like "pirates" by publishers and their lawyers, but it is not true that they do what they do out of malicious intent. They do it because it is natural for them to be nice and help their friends.

One problem with this attitude is that many of us don't want to disobey the law, because copyright is an indiscriminate weapon that cuts both ways. We prefer, therefore, to beat the hoarders at their own game. This means that we cannot use, modify or distribute programs that are not distributed with a copyright notice and appropriate permissions, because the default status of such programs is that no permissions are granted whatsoever. If you write a program that you want to share with other people, then please apply the terms of the GPL to the copies that you distribute, so that your friends can use, modify and share the program with their friends without breaking any laws, and to protect your contribution to our community from the hoarders. Please do not violate copyright law. Instead, say no to proprietary software and use free software on the free GNU/Linux operating system.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B. Philosophical issues

The GNU development tools were written primarily to aid the development of free software. The free software movement was born of important philosophical concerns, and it is these concerns that motivate many software developers to contribute their code to our community. In this appendix we include a few articles written by Richard Stallman that discuss these concerns. The text of these articles is copyrighted and is included here with permission, under the following terms:

Copying Notice
 
Copyright (C) 1998 Free Software Foundation Inc
59 Temple Place, Suite 330, Boston, MA 02111, USA
Verbatim copying and distribution is permitted in any medium,
provided this notice is preserved.

All of these articles, and others, are distributed on the web at:
http://www.gnu.org/philosophy/index.html

B.1 Why software should not have owners  
B.2 Why free software needs free documentation  
B.3 Copyleft; Pragmatic Idealism  
B.4 The X Windows Trap  
B.5 Categories of software  
B.6 Confusing words  


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.1 Why software should not have owners

Digital information technology contributes to the world by making it easier to copy and modify information. Computers promise to make this easier for all of us.

Not everyone wants it to be easier. The system of copyright gives software programs "owners", most of whom aim to withhold software's potential benefit from the rest of the public. They would like to be the only ones who can copy and modify the software that we use.

The copyright system grew up with printing--a technology for mass production copying. Copyright fit in well with this technology because it restricted only the mass producers of copies. It did not take freedom away from readers of books. An ordinary reader, who did not own a printing press, could copy books only with pen and ink, and few readers were sued for that.

Digital technology is more flexible than the printing press: when information has digital form, you can easily copy it to share it with others. This very flexibility makes a bad fit with a system like copyright. That's the reason for the increasingly nasty and draconian measures now used to enforce software copyright. Consider these four practices of the Software Publishers Association (SPA):

All four practices resemble those used in the former Soviet Union, where every copying machine had a guard to prevent forbidden copying, and where individuals had to copy information secretly and pass it from hand to hand as "samizdat". There is of course a difference: the motive for information control in the Soviet Union was political; in the US the motive is profit. But it is the actions that affect us, not the motive. Any attempt to block the sharing of information, no matter why, leads to the same methods and the same harshness.

Owners make several kinds of arguments for giving them the power to control how we use information:

As a computer user today, you may find yourself using a proprietary program. If your friend asks to make a copy, it would be wrong to refuse. Cooperation is more important than copyright. But underground, closet cooperation does not make for a good society. A person should aspire to live an upright life openly with pride, and this means saying "No" to proprietary software.

You deserve to be able to cooperate openly and freely with other people who use software. You deserve to be able to learn how the software works, and to teach your students with it. You deserve to be able to hire your favorite programmer to fix it when it breaks.

You deserve free software.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.2 Why free software needs free documentation

The biggest deficiency in free operating systems is not in the software--it is the lack of good free manuals that we can include in these systems. Many of our most important programs do not come with full manuals. Documentation is an essential part of any software package; when an important free software package does not come with a free manual, that is a major gap. We have many such gaps today.

Once upon a time, many years ago, I thought I would learn Perl. I got a copy of a free manual, but I found it hard to read. When I asked Perl users about alternatives, they told me that there were better introductory manuals--but those were not free.

Why was this? The authors of the good manuals had written them for O'Reilly Associates, which published them with restrictive terms--no copying, no modification, source files not available--which exclude them from the free software community.

That wasn't the first time this sort of thing has happened, and (to our community's great loss) it was far from the last. Proprietary manual publishers have enticed a great many authors to restrict their manuals since then. Many times I have heard a GNU user eagerly tell me about a manual that he is writing, with which he expects to help the GNU project--and then had my hopes dashed, as he proceeded to explain that he had signed a contract with a publisher that would restrict it so that we cannot use it.

Given that writing good English is a rare skill among programmers, we can ill afford to lose manuals this way.

Free documentation, like free software, is a matter of freedom, not price. The problem with these manuals was not that O'Reilly Associates charged a price for printed copies--that in itself is fine. (The Free Software Foundation sells printed copies of free GNU manuals, too.) But GNU manuals are available in source code form, while these manuals are available only on paper. GNU manuals come with permission to copy and modify; the Perl manuals do not. These restrictions are the problems.

The criterion for a free manual is pretty much the same as for free software: it is a matter of giving all users certain freedoms. Redistribution (including commercial redistribution) must be permitted, so that the manual can accompany every copy of the program, on-line or on paper. Permission for modification is crucial too.

As a general rule, I don't believe that it is essential for people to have permission to modify all sorts of articles and books. The issues for writings are not necessarily the same as those for software. For example, I don't think you or I are obliged to give permission to modify articles like this one, which describe our actions and our views.

But there is a particular reason why the freedom to modify is crucial for documentation for free software. When people exercise their right to modify the software, and add or change its features, if they are conscientious they will change the manual too--so they can provide accurate and usable documentation with the modified program. A manual which forbids programmers to be conscientious and finish the job, or more precisely requires them to write a new manual from scratch if they change the program, does not fill our community's needs.

While a blanket prohibition on modification is unacceptable, some kinds of limits on the method of modification pose no problem. For example, requirements to preserve the original author's copyright notice, the distribution terms, or the list of authors, are ok. It is also no problem to require modified versions to include notice that they were modified, even to have entire sections that may not be deleted or changed, as long as these sections deal with nontechnical topics. (Some GNU manuals have them.)

These kinds of restrictions are not a problem because, as a practical matter, they don't stop the conscientious programmer from adapting the manual to fit the modified program. In other words, they don't block the free software community from doing its thing with the program and the manual together.

However, it must be possible to modify all the technical content of the manual; otherwise, the restrictions do block the community, the manual is not free, and so we need another manual.

Unfortunately, it is often hard to find someone to write another manual when a proprietary manual exists. The obstacle is that many users think that a proprietary manual is good enough--so they don't see the need to write a free manual. They do not see that the free operating system has a gap that needs filling.

Why do users think that proprietary manuals are good enough? Some have not considered the issue. I hope this article will do something to change that.

Other users consider proprietary manuals acceptable for the same reason so many people consider proprietary software acceptable: they judge in purely practical terms, not using freedom as a criterion. These people are entitled to their opinions, but since those opinions spring from values which do not include freedom, they are no guide for those of us who do value freedom.

Please spread the word about this issue. We continue to lose manuals to proprietary publishing. If we spread the word that proprietary manuals are not sufficient, perhaps the next person who wants to help GNU by writing documentation will realize, before it is too late, that he must above all make it free.

We can also encourage commercial publishers to sell free, copylefted manuals instead of proprietary ones. One way you can help this is to check the distribution terms of a manual before you buy it, and prefer copylefted manuals to non-copylefted ones.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.3 Copyleft; Pragmatic Idealism

Every decision a person makes stems from the person's values and goals. People can have many different goals and values; fame, profit, love, survival, fun, and freedom, are just some of the goals that a good person might have. When the goal is to help others as well as oneself, we call that idealism.

My work on free software is motivated by an idealistic goal: spreading freedom and cooperation. I want to encourage free software to spread, replacing proprietary software which forbids cooperation, and thus make our society better.

That's the basic reason why the GNU General Public License is written the way it is--as a copyleft. All code added to a GPL-covered program must be free software, even if it is put in a separate file. I make my code available for use in free software, and not for use in proprietary software, in order to encourage other people who write software to make it free as well. I figure that since proprietary software developers use copyright to stop us from sharing, we cooperators can use copyright to give other cooperators an advantage of their own: they can use our code.

Not everyone who uses the GNU GPL has this goal. Many years ago, a friend of mine was asked to rerelease a copylefted program under non-copyleft terms, and he responded more or less like this:
 
Sometimes I work on free software, and sometimes I work on
proprietary software--but when I work on proprietary software, I
expect to get paid.
He was willing to share his work with a community that shares software, but saw no reason to give a handout to a business. His goal was different from mine, but he decided that the GNU GPL was useful for his goal too.

If you want to accomplish something in the world, idealism is not enough--you need to choose a method which works to achieve the goal. In other words, you need to be "pragmatic." Is the GPL pragmatic? Let's look at its results.

Consider GNU C++. Why do we have a free C++ compiler? Only because the GNU GPL said it had to be free. GNU C++ was developed by an industry consortium, MCC, starting from the GNU C compiler. MCC normally makes its work as proprietary as can be. But they made the C++ front end free software, because the GNU GPL said that was the only way they could release it. The C++ front end included many new files, but since they were meant to be linked with GCC, the GPL did apply to them. The benefit to our community is evident.

Consider GNU Objective C. NeXT initially wanted to make this front end proprietary; they proposed to release it as .o files, and let users link them with the rest of GCC, thinking this might be a way around the GPL's requirements. But our lawyer said that this would not evade the requirements, that it was not allowed. And so they made the Objective C front end free software.

Those examples happened years ago, but the GNU GPL continues to bring us more free software.

Many GNU libraries are covered by the GNU Library General Public License, but not all. One GNU library which is covered by the ordinary GNU GPL is Readline, which implements command-line editing. A month ago, I found out about a non-free program which was designed to use Readline, and told the developer this was not allowed. He could have taken command-line editing out of the program, but what he actually did was rerelease it under the GPL. Now it is free software.

The programmers who write improvements to GCC (or Emacs, or Bash, or Linux, or any GPL-covered program) are often employed by companies or universities. When the programmer wants to return his improvements to the community, and see his code in the next release, the boss may say, "Hold on there--your code belongs to us! We don't want to share it; we have decided to turn your improved version into a proprietary software product."

Here the GNU GPL comes to the rescue. The programmer shows the boss that this proprietary software product would be copyright infringement, and the boss realizes that he has only two choices: release the new code as free software, or not at all. Almost always he lets the programmer do as he intended all along, and the code goes into the next release.

The GNU GPL is not Mr. Nice Guy. It says "no" to some of the things that people sometimes want to do. There are users who say that this is a bad thing--that the GPL "excludes" some proprietary software developers who "need to be brought into the free software community".

But we are not excluding them from our community; they are choosing not to enter. Their decision to make software proprietary is a decision to stay out of our community. Being in our community means joining in cooperation with us; we cannot "bring them into our community" if they don't want to join.

What we can do is offer them an inducement to join. The GNU GPL is designed to make an inducement from our existing software: "If you will make your software free, you can use this code." Of course, it won't win 'em all, but it wins some of the time.

Proprietary software development does not contribute to our community, but its developers often want handouts from us. Free software users can offer free software developers strokes for the ego--recognition and gratitude--but it can be very tempting when a business tells you, "Just let us put your package in our proprietary program, and your program will be used by many thousands of people!" The temptation can be powerful, but in the long run we are all better off if we resist it.

The temptation and pressure are harder to recognize when they come indirectly, through free software organizations that have adopted a policy of catering to proprietary software. The X Consortium (and its successor, the Open Group) offers an example: funded by companies that made proprietary software, they have strived for a decade to persuade programmers not to use copyleft. Now that the Open Group has made X11R6.4 non-free software, those of us who resisted that pressure are glad that we did.

Pragmatically speaking, thinking about greater long-term goals will strengthen your will to resist this pressure. If you focus your mind on the freedom and community that you can build by staying firm, you will find the strength to do it. "Stand for something, or you will fall for nothing."

And if cynics ridicule freedom, ridicule community...if "hard nosed realists" say that profit is the only ideal...just ignore them, and use copyleft all the same.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.4 The X Windows Trap

To copyleft or not to copyleft? That is one of the major controversies in the free software community. The idea of copyleft is that we should fight fire with fire--that we should use copyright to make sure our code stays free. The GNU GPL is one example of a copyleft license.

Some free software developers prefer non-copyleft distribution. Non-copyleft licenses such as the XFree86 and BSD licenses are based on the idea of never saying no to anyone--not even to someone who seeks to use your work as the basis for restricting other people. Non-copyleft licensing does nothing wrong, but it misses the opportunity to actively protect our freedom to change and redistribute software. For that, we need copyleft.

For many years, the X Consortium was the chief opponent of copyleft. It exerted both moral suasion and pressure to discourage free software developers from copylefting their programs. It used moral suasion by suggesting that it is not nice to say no. It used pressure through its rule that copylefted software could not be in the X Distribution.

Why did the X Consortium adopt this policy? It had to do with their definition of success. The X Consortium defined success as popularity--specifically, getting computer companies to use X Windows. This definition put the computer companies in the driver's seat. Whatever they wanted, the X Consortium had to help them get it.

Computer companies normally distribute proprietary software. They wanted free software developers to donate their work for such use. If they had asked for this directly, people would have laughed. But the X Consortium, fronting for them, could present this request as an unselfish one. "Join us in donating our work to proprietary software developers," they said, suggesting that this is a noble form of self-sacrifice. "Join us in achieving popularity", they said, suggesting that it was not even a sacrifice.

But self-sacrifice is not the issue: tossing away the defenses of copyleft, which protect the freedom of everyone in the community, is sacrificing more than yourself. Those who granted the X Consortium's request entrusted the community's future to the good will of the X Consortium.

This trust was misplaced. In its last year, the X Consortium made a plan to restrict the forthcoming X11R6.4 release so that it would not be free software. They decided to start saying no, not only to proprietary software developers, but to our community as well.

There is an irony here. If you said yes when the X Consortium asked you not to use copyleft, you put the X Consortium in a position to license and restrict its version of your program, along with its own code.

The X Consortium did not carry out this plan. Instead it closed down and transferred X development to the Open Group, whose staff are now carrying out a similar plan. To give them credit, when I asked them to release X11R6.4 under the GNU GPL in parallel with their planned restrictive license, they were willing to consider the idea. (They were firmly against staying with the old X11 distribution terms.) Before they said yes or no to this proposal, it had already failed for another reason: the XFree86 group follows the X Consortium's old policy, and will not accept copylefted software.

Even if the X Consortium and the Open Group had never planned to restrict X, someone else could have done it. Non-copylefted software is vulnerable from all directions; it lets anyone make a non-free version dominant, if he will invest sufficient resources to add some important feature using proprietary code. Users who choose software based on technical characteristics, rather than on freedom, could easily be lured to the non-free version for short term convenience.

The X Consortium and Open Group can no longer exert moral suasion by saying that it is wrong to say no. This will make it easier to decide to copyleft your X-related software.

When you work on the core of X, on programs such as the X server, Xlib, and Xt, there is a practical reason not to use copyleft. The XFree86 group does an important job for the community in maintaining these programs, and the benefit of copylefting our changes would be less than the harm done by a fork in development. So it is better to work with the XFree86 group and not copyleft our changes on these programs. Likewise for utilities such as xset and xrdb, which are close to the core of X, and which do not need major improvements. At least we know that the XFree86 group has a firm commitment to developing these programs as free software.

The issue is different for programs outside the core of X: applications, window managers, and additional libraries and widgets. There is no reason not to copyleft them, and we should copyleft them.

In case anyone feels the pressure exerted by the criteria for inclusion in X Distributions, the GNU project will undertake to publicize copylefted packages that work with X. If you would like to copyleft something, and you worry that its omission from X Distributions will impede its popularity, please ask us to help.

At the same time, it is better if we do not feel too much need for popularity. When a businessman tempts you with "more popularity", he may try to convince you that his use of your program is crucial to its success. Don't believe it! If your program is good, it will find many users anyway; you don't need to feel desperate for any particular users, and you will be stronger if you do not. You can get an indescribable sense of joy and freedom by responding, "Take it or leave it--that's no skin off my back." Often the businessman will turn around and accept the program with copyleft, once you call the bluff.

Friends, free software developers, don't repeat a mistake. If we do not copyleft our software, we put its future at the mercy of anyone equipped with more resources than scruples. With copyleft, we can defend freedom, not just for ourselves, but for our whole community.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.5 Categories of software

Here is a glossary of various categories of software that are often mentioned in discussions of free software. It explains which categories overlap or are part of other categories.


[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

B.6 Confusing words

There are a number of words and phrases which we recommend avoiding, either because they are ambiguous or because they imply an opinion that we hope you may not entirely agree with.


[Top] [Contents] [Index] [ ? ]

Footnotes

(1)

GUI is an abbreviation for graphical user interface.

(2)

Note that in Emacs parlance a window is not an X window. A frame is an X window. A window is a region within the frame.

(3)

M-x means ALT-x. If you do not have an ALT key, then use ESC x instead.

(4)

Many individuals refer to Microsoft Windows 95 as Win95. In hacker terminology, a win is something that is good. We do not believe that Microsoft Windows 95 is a good operating system; therefore, we call it Lose95.

(5)

If this sounds surprising, don't forget that there is no ANSI standard for Makefiles.


[Top] [Contents] [Index] [ ? ]

Table of Contents


[Top] [Contents] [Index] [ ? ]

Short Table of Contents

Preface
Acknowledgements
Copying
1. Introduction to the GNU build system
2. Writing Good Programs
3. Using GNU Emacs
4. Compiling with Makefiles
5. Using Automake and Autoconf
6. Using Autotools
7. C++ and Autoconf
8. Fortran with Autoconf
9. Maintaining Documentation
A. Legal issues with Free Software
B. Philosophical issues

[Top] [Contents] [Index] [ ? ]

About this document

This document was generated by Marcelo Roberto Jimenez on April 3, 2003 using texi2html.

The buttons in the navigation panels have the following meaning:

Button      Name         Go to                                 From 1.2.3 go to
[ < ]       Back         previous section in reading order     1.2.2
[ > ]       Forward      next section in reading order         1.2.4
[ << ]      FastBack     previous or up-and-previous section   1.1
[ Up ]      Up           up section                            1.2
[ >> ]      FastForward  next or up-and-next section           1.3
[Top]       Top          cover (top) of document
[Contents]  Contents     table of contents
[Index]     Index        concept index
[ ? ]       About        this page

where the Example assumes that the current position is at Subsubsection One-Two-Three of a document of the following structure:
