Copyright (C) 1995-2006 The University of Melbourne.
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.
This guide describes the compilation environment of Mercury — how to build and debug Mercury programs.
This document describes the compilation environment of Mercury. It describes how to use mmc, the Mercury compiler; how to use mmake, the “Mercury make” program, a tool built on top of ordinary or GNU make to simplify the handling of Mercury programs; how to use mdb, the Mercury debugger; and how to use mprof, the Mercury profiler.
We strongly recommend that programmers use mmake rather than invoking mmc directly, because mmake is generally easier to use and avoids unnecessary recompilation.
Mercury source files must be named *.m. Each Mercury source file should contain a single Mercury module whose module name should be the same as the filename without the .m extension.
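For example, a source file named hello.m would normally contain a module named hello; a minimal sketch of such a file is:

:- module hello.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.
:- implementation.
main(!IO) :-
    io.write_string("Hello, world\n", !IO).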
The Mercury implementation uses a variety of intermediate files, which are described below. But all you really need to know is how to name source files. For historical reasons, the default behaviour is for intermediate files to be created in the current directory, but if you use the --use-subdirs option to mmc or mmake, all these intermediate files will be created in a Mercury subdirectory, where you can happily ignore them. Thus you may wish to skip the rest of this chapter.
In cases where the source file name and module name don't match, the names for intermediate files are based on the name of the module from which they are derived, not on the source file name.
Files ending in .int, .int0, .int2 and .int3 are interface files; these are generated automatically by the compiler, using the --make-interface (or --make-int), --make-private-interface (or --make-priv-int), --make-short-interface (or --make-short-int) options. Files ending in .opt are interface files used in inter-module optimization, and are created using the --make-optimization-interface (or --make-opt-int) option. Similarly, files ending in .trans_opt are interface files used in transitive inter-module optimization, and are created using the --make-transitive-optimization-interface (or --make-trans-opt-int) option.
Since the interface of a module changes less often than its implementation, the .int, .int0, .int2, .int3, .opt, and .trans_opt files will remain unchanged on many compilations. To avoid unnecessary recompilations of the clients of the module, the timestamps on these files are updated only if their contents change. The .date, .date0, .date3, .optdate, and .trans_opt_date files associated with the module are timestamp files; they are consulted when deciding whether the interface files need to be regenerated.
.c_date, .il_date, .java_date, .s_date and .pic_s_date files perform a similar function for .c, .il, .java, .s and .pic_s files respectively. When smart recompilation (see Auxiliary output options) works out that a module does not need to be recompiled, the timestamp file for the target file is updated, and the timestamp of the target file is left unchanged. .used files contain dependency information for smart recompilation (see Auxiliary output options). Files ending in .d are automatically-generated Makefile fragments which contain the dependencies for a module. Files ending in .dep are automatically-generated Makefile fragments which contain the rules for an entire program. Files ending in .dv are automatically-generated Makefile fragments which contain variable definitions for an entire program.
As usual, .c files are C source code, and .o files are object code. In addition, .pic_o files are object code files that contain position-independent code (PIC). .lpic_o files are object code files that can be linked with shared libraries, but don't necessarily contain position-independent code themselves. .mh and .mih files are C header files generated by the Mercury compiler. The non-standard extensions are necessary to avoid conflicts with system header files. .s files and .pic_s files are assembly language. .java, .class and .jar files are Java source code, Java bytecode and Java archives respectively. .il files are Intermediate Language (IL) files for the .NET Common Language Runtime.
Files ending in .rlo are Aditi-RL bytecode files, which are executed by the Aditi deductive database system (see Using Aditi).
Following a long Unix tradition, the Mercury compiler is called mmc (for “Melbourne Mercury Compiler”). Some of its options (e.g. -c, -o, and -I) have a similar meaning to that in other Unix compilers.
Arguments to mmc may be either file names (ending in .m), or module names, with . (rather than __ or :) as the module qualifier. For a module name such as foo.bar.baz, the compiler will look for the source in files foo.bar.baz.m, bar.baz.m, and baz.m, in that order. Note that if the file name does not include all the module qualifiers (e.g. if it is bar.baz.m or baz.m rather than foo.bar.baz.m), then the module name in the :- module declaration for that module must be fully qualified. To make the compiler look in another file for a module, use mmc -f sources-files to generate a mapping from module name to file name, where sources-files is the list of source files in the directory (see Output options).
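For example, if some of the modules in the current directory live in files whose names do not match their module names, you could run the following once in that directory (using a shell wildcard is just one way of listing the source files):

mmc -f *.m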
To compile a program which consists of just a single source file, use the command
mmc filename.m
Unlike traditional Unix compilers, however, mmc will put the executable into a file called filename, not a.out.
For programs that consist of more than one source file, we strongly recommend that you use Mmake (see Using Mmake). Mmake will perform all the steps listed below, using automatic dependency analysis to ensure that things are done in the right order, and that steps are not repeated unnecessarily. If you use Mmake, then you don't need to understand the details of how the Mercury implementation goes about building programs. Thus you may wish to skip the rest of this chapter.
To compile a source file to object code without creating an executable, use the command
mmc -c filename.m
mmc will put the object code into a file called module.o, where module is the name of the Mercury module defined in filename.m. It also will leave the intermediate C code in a file called module.c. If the source file contains nested modules, then each sub-module will get compiled to separate C and object files.
Before you can compile a module, you must make the interface files for the modules that it imports (directly or indirectly). You can create the interface files for one or more source files using the following commands:
mmc --make-short-int filename1.m filename2.m ...
mmc --make-priv-int filename1.m filename2.m ...
mmc --make-int filename1.m filename2.m ...
If you are going to compile with --intermodule-optimization enabled, then you also need to create the optimization interface files.
mmc --make-opt-int filename1.m filename2.m ...
If you are going to compile with --transitive-intermodule-optimization enabled, then you also need to create the transitive optimization files.
mmc --make-trans-opt filename1.m filename2.m ...
Given that you have made all the interface files, one way to create an executable for a multi-module program is to compile all the modules at the same time using the command
mmc filename1.m filename2.m ...
This will by default put the resulting executable in filename1, but you can use the -o filename option to specify a different name for the output file, if you so desire. The other way to create an executable for a multi-module program is to compile each module separately using mmc -c, and then link the resulting object files together. The linking is a two stage process.
First, you must create and compile an initialization file, which is a C source file containing calls to automatically generated initialization functions contained in the C code of the modules of the program:
c2init module1.c module2.c ... > main-module_init.c
mgnuc -c main-module_init.c
The c2init command line must contain the name of the C file of every module in the program. The order of the arguments is not important. The mgnuc command is the Mercury GNU C compiler; it is a shell script that invokes the GNU C compiler gcc with the options appropriate for compiling the C programs generated by Mercury.
You then link the object code of each module with the object code of the initialization file to yield the executable:
ml -o main-module module1.o module2.o ... main-module_init.o
ml, the Mercury linker, is another shell script that invokes a C compiler with options appropriate for Mercury, this time for linking. ml also pipes any error messages from the linker through mdemangle, the Mercury symbol demangler, so that error messages refer to predicate and function names from the Mercury source code rather than to the names used in the intermediate C code.
The above command puts the executable in the file main-module. The same command line without the -o option would put the executable into the file a.out.
mmc and ml both accept a -v (verbose) option. You can use that option to see what is actually going on. For the full set of options of mmc, see Invocation.
Once you have created an executable for a Mercury program, you can go ahead and execute it. You may however wish to specify certain options to the Mercury runtime system. The Mercury runtime accepts options via the MERCURY_OPTIONS environment variable. The most useful of these are the options that set the size of the stacks. (For the full list of available options, see Environment.)
The det stack and the nondet stack are allocated fixed sizes at program start-up. The default size is 4096k for the det stack and 128k for the nondet stack, but these can be overridden with the --detstack-size and --nondetstack-size options, whose arguments are the desired sizes of the det and nondet stacks respectively, in units of kilobytes. On operating systems that provide the appropriate support, the Mercury runtime will ensure that stack overflow is trapped by the virtual memory system. With conservative garbage collection (the default), the heap will start out with a zero size, and will be dynamically expanded as needed. When not using conservative garbage collection, the heap has a fixed size like the stacks. The default size is 4 Mb, but this can be overridden with the --heap-size option.
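For example, to run a program with larger stacks than the defaults you might use something like the following (the sizes here are arbitrary illustrations, not recommendations):

MERCURY_OPTIONS="--detstack-size 8192 --nondetstack-size 512" ./hello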
Mmake, short for “Mercury Make”, is a tool for building Mercury programs that is built on top of ordinary or GNU Make. With Mmake, building even a complicated Mercury program consisting of a number of modules is as simple as
mmc -f source-files
mmake main-module.depend
mmake main-module
Mmake only recompiles those files that need to be recompiled, based on automatically generated dependency information. Most of the dependencies are stored in .d files that are automatically recomputed every time you recompile, so they are never out-of-date. A little bit of the dependency information is stored in .dep and .dv files which are more expensive to recompute. The mmake main-module.depend command which recreates the main-module.dep and main-module.dv files needs to be repeated only when you add or remove a module from your program, and there is no danger of getting an inconsistent executable if you forget this step — instead you will get a compile or link error.
The mmc -f step above is only required if there are any source files for which the file name does not match the module name. mmc -f generates a file Mercury.modules containing a mapping from module name to source file. The Mercury.modules file must be updated when a source file for which the file name does not match the module name is added to or removed from the directory.
mmake allows you to build more than one program in the same directory. Each program must have its own .dep and .dv files, and therefore you must run mmake program.depend for each program. The Mercury.modules file is used for all programs in the directory.
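For example, for two programs whose top-level modules are prog1.m and prog2.m in the same directory, the build might look like this:

mmake prog1.depend
mmake prog1
mmake prog2.depend
mmake prog2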
If there is a file called Mmake or Mmakefile in the current directory, Mmake will include that file in its automatically-generated Makefile. The Mmake file can override the default values of various variables used by Mmake's builtin rules, or it can add additional rules, dependencies, and actions.
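As an illustration, the following Mmake file overrides one of the variables described below and adds an extra dependency of its own; the program name prog1 and the file data/config.dat are hypothetical, and the flag choice is just an example:

# Override a default variable (illustrative flag choice):
MCFLAGS = --intermodule-optimization

# An additional dependency: relink prog1 whenever this data file changes.
prog1: data/config.dat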
Mmake's builtin rules are defined by the file prefix/lib/mercury/mmake/Mmake.rules (where prefix is /usr/local/mercury-version by default, and version is the version number, e.g. 0.6), as well as the rules and variables in the automatically-generated .dep and .dv files. Among the targets these rules define is one for installing a library: it installs the library in each of the grades specified by the LIBGRADES variable, and it also builds and installs the necessary interface files. The variable INSTALL specifies the name of the command to use to install each file, by default cp. The variable INSTALL_MKDIR specifies the command to use to create directories, by default mkdir -p. For more information, see Installing libraries.
The variables used by the builtin rules (and their default values) are defined in the file prefix/lib/mercury/mmake/Mmake.vars, however these may be overridden by user Mmake files. Some of the more useful variables are:
MAIN_TARGET
The name of the default target to build when mmake is invoked without any target on the command line.
MC
The command used to invoke the Mercury compiler, by default mmc.
GRADEFLAGS and EXTRA_GRADEFLAGS
Compilation model (grade) options, which are passed to all of the tools that must agree on the grade (in particular mmc, mgnuc, ml, and c2init).
MCFLAGS and EXTRA_MCFLAGS
Options to pass to the Mercury compiler. (Note that compilation model options should be specified in GRADEFLAGS, not in MCFLAGS.)
MGNUC
The command used to invoke the C compiler, by default the mgnuc script.
MGNUCFLAGS and EXTRA_MGNUCFLAGS
Options to pass to the mgnuc script.
CFLAGS and EXTRA_CFLAGS
Options to pass to the C compiler.
JAVACFLAGS and EXTRA_JAVACFLAGS
Options to pass to the Java compiler (when using the Java back-end).
MS_CLFLAGS, EXTRA_MS_CLFLAGS and MS_CL_NOASM
Variables controlling the invocation of the Microsoft cl compiler (when using the .NET back-end).
ML
The command used to invoke the Mercury linker, by default the ml script.
LINKAGE
Whether executables should be linked statically or dynamically; may be set to shared or static.
MERCURY_LINKAGE
Whether executables should be linked against the shared or the static versions of the Mercury libraries; may be set to shared or static (see the --mercury-linkage option in Using libraries).
MLFLAGS and EXTRA_MLFLAGS
Options to pass to the ml and c2init scripts. (Note that compilation model options should be specified in GRADEFLAGS, not in MLFLAGS.)
LDFLAGS and EXTRA_LDFLAGS
Options to pass to the command used to link executables (use ml --print-link-command to find out what command is used, usually the C compiler).
LD_LIBFLAGS and EXTRA_LD_LIBFLAGS
Options to pass to the command used to link shared libraries (use ml --print-shared-lib-link-command to find out what command is used, usually the C compiler or the system linker, depending on the platform).
MLLIBS and EXTRA_MLLIBS
A list of -l options naming extra libraries to link against.
MLOBJS and EXTRA_MLOBJS
A list of extra object files to link into any program or library being built.
C2INITFLAGS and EXTRA_C2INITFLAGS
C2INITFLAGS and EXTRA_C2INITFLAGS are obsolete synonyms for MLFLAGS and EXTRA_MLFLAGS (ml and c2init take the same set of options). (Note that compilation model options and extra files to be processed by c2init should not be specified in C2INITFLAGS — they should be specified in GRADEFLAGS and C2INITARGS, respectively.)
C2INITARGS and EXTRA_C2INITARGS
Extra files to be processed by c2init. These variables should not be used to specify options to c2init (those should be specified in MLFLAGS) since they are also used to derive extra dependency information.
EXTRA_LIBRARIES
A list of extra Mercury libraries to link into any programs being built (see Using libraries).
EXTRA_LIB_DIRS
A list of extra Mercury library directory hierarchies to search for those libraries (see Using libraries).
INSTALL_PREFIX
The path to the root of the directory tree in which libraries should be installed (see Installing libraries).
INSTALL
The command used to install each file, by default cp.
INSTALL_MKDIR
The command used to create directories, by default mkdir -p.
LIBGRADES
The list of grades in which libraries should be installed. (Note that any GRADEFLAGS settings will also be applied when the library is built in each of the listed grades, so you may not get what you expect if those options are not subsumed by each of the grades listed.)
Other variables also exist — see prefix/lib/mercury/mmake/Mmake.vars for a complete list.
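For instance, an Mmake file might set some of these variables as follows; the grade, optimization level, and extra C library here are illustrative assumptions, not requirements:

GRADEFLAGS = --grade asm_fast.gc
MCFLAGS    = -O5 --intermodule-optimization
MLLIBS     = -lreadline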
If you wish to temporarily change the flags passed to an executable, rather than setting the various FLAGS variables directly, you can set an EXTRA_ variable. This is particularly intended for use where a shell script needs to call mmake and add an extra parameter, without interfering with the flag settings in the Mmakefile.
For each of the variables for which there is a version with an EXTRA_ prefix, there is also a version with an ALL_ prefix that is defined to include both the ordinary and the EXTRA_ version. If you wish to use the values of any of these variables in your Mmakefile (as opposed to setting the values), then you should use the ALL_ version.
It is also possible to override these variables on a per-file basis. For example, if you have a module called, say, bad_style.m, which triggers lots of compiler warnings, and you want to disable the warnings just for that file, but keep them for all the other modules, then you can override MCFLAGS just for that file. This is done by setting the variable MCFLAGS-bad_style, as shown here:
MCFLAGS-bad_style = --inhibit-warnings
Mmake has a few options, including --use-subdirs, --use-mmc-make, --save-makefile, --verbose, and --no-warn-undefined-vars. For details about these options, see the man page or type mmake --help.
Finally, since Mmake is built on top of Make or GNU Make, you can also make use of the features and options supported by the underlying Make. In particular, GNU Make has support for running jobs in parallel, which is very useful if you have a machine with more than one CPU.
As an alternative to Mmake, the Mercury compiler now contains a significant part of the functionality of Mmake, using mmc's --make option. The advantages of mmc --make over Mmake are that there is no mmake depend step and the dependencies are more accurate. Parallel builds are not yet supported.
Note that --use-subdirs is automatically enabled if you specify mmc --make.
The Mmake variables above can be used by mmc --make if they are set in a file called Mercury.options. The Mercury.options file has the same syntax as an Mmakefile, but only variable assignments and include directives are allowed. All variables in Mercury.options are treated as if they are assigned using :=. Variables may also be set in the environment, overriding settings in options files.
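For example, a Mercury.options file for use with mmc --make might contain just variable settings such as these (reusing the per-module override from the earlier example):

MCFLAGS = --intermodule-optimization
MCFLAGS-bad_style = --inhibit-warnings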
mmc --make can be used in conjunction with Mmake. This is useful for projects which include source code written in languages other than Mercury. The --use-mmc-make Mmake option disables Mmake's Mercury-specific rules. Mmake will then process source files written in other languages, but all Mercury compilation will be done by mmc --make. The following variables can be set in the Mmakefile to control the use of mmc --make.
MERCURY_MAIN_MODULES
MC_BUILD_FILES
MC_MAKE_FLAGS and EXTRA_MC_MAKE_FLAGS
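As a sketch, assuming MERCURY_MAIN_MODULES names the top-level module(s) of the programs to be built by mmc --make, an Mmakefile fragment for this style of build might contain no more than:

MERCURY_MAIN_MODULES = myprog

with the program then built by running mmake --use-mmc-make myprog.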
Often you will want to use a particular set of Mercury modules in more than one program. The Mercury implementation includes support for developing libraries, i.e. sets of Mercury modules intended for reuse. It allows separate compilation of libraries and, on many platforms, it supports shared object libraries.
A Mercury library is identified by a top-level module, which should contain all of the modules in that library as sub-modules. It may be as simple as this mypackage.m file:
:- module mypackage.
:- interface.
:- include_module foo, bar, baz.
This defines a module mypackage containing sub-modules mypackage:foo, mypackage:bar, and mypackage:baz.
It is also possible to build libraries of unrelated modules, so long as the top-level module imports all the necessary modules. For example:
:- module blah.
:- import_module fee, fie, foe, fum.
This example defines a module blah, which has no functionality of its own, and which is just used for grouping the unrelated modules fee, fie, foe, and fum.
Generally it is better style for each library to consist of a single module which encapsulates its sub-modules, as in the first example, rather than just a group of unrelated modules, as in the second example.
Generally Mmake will do most of the work of building libraries automatically. Here's a sample Mmakefile for creating a library.
MAIN_TARGET = libmypackage

depend: mypackage.depend
The Mmake target libfoo is a built-in target for creating a library whose top-level module is foo.m. The automatically generated Mmake rules for the target libfoo will create all the files needed to use the library. (You will need to run mmake foo.depend first to generate the module dependency information.)
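For the mypackage example above, the whole build therefore amounts to:

mmake mypackage.depend
mmake libmypackage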
Mmake will create static (non-shared) object libraries and, on most platforms, shared object libraries; however, we do not yet support the creation of dynamic link libraries (DLLs) on Windows. Static libraries are created using the standard tools ar and ranlib. Shared libraries are created using the --make-shared-lib option to ml. The automatically-generated Make rules for libmypackage will look something like this:
libmypackage: libmypackage.a libmypackage.so \
		$(mypackage.ints) $(mypackage.int3s) \
		$(mypackage.opts) $(mypackage.trans_opts) mypackage.init

libmypackage.a: $(mypackage.os)
	rm -f libmypackage.a
	$(AR) $(ARFLAGS) libmypackage.a $(mypackage.os) $(MLOBJS)
	$(RANLIB) $(RANLIBFLAGS) mypackage.a

libmypackage.so: $(mypackage.pic_os)
	$(ML) $(MLFLAGS) --make-shared-lib -o libmypackage.so \
		$(mypackage.pic_os) $(MLPICOBJS) $(MLLIBS)

libmypackage.init:
	...

clean:
	rm -f libmypackage.a libmypackage.so
If necessary, you can override the default definitions of the variables such as ML, MLFLAGS, MLPICOBJS, and MLLIBS to customize the way shared libraries are built. Similarly AR, ARFLAGS, MLOBJS, RANLIB, and RANLIBFLAGS control the way static libraries are built. (The MLOBJS variable is supposed to contain a list of additional object files to link into the library, while the MLLIBS variable should contain a list of -l options naming other libraries used by this library. MLPICOBJS is described below.)
Note that to use a library, as well as the shared or static object library, you also need the interface files. That's why the libmypackage target builds $(mypackage.ints) and $(mypackage.int3s). If the people using the library are going to use intermodule optimization, you will also need the intermodule optimization interfaces. The libmypackage target will build $(mypackage.opts) if --intermodule-optimization is specified in your MCFLAGS variable (this is recommended). Similarly, if the people using the library are going to use transitive intermodule optimization, you will also need the transitive intermodule optimization interfaces ($(mypackage.trans_opt)). These will be built if --trans-intermod-opt is specified in your MCFLAGS variable. In addition, with certain compilation grades, programs will need to execute some startup code to initialize the library; the mypackage.init file contains information about initialization code for the library. The libmypackage target will build this file.
On some platforms, shared objects must be created using position independent code (PIC), which requires passing some special options to the C compiler. On these platforms, Mmake will create .pic_o files, and $(mypackage.pic_os) will contain a list of the .pic_o files for the library whose top-level module is mypackage. In addition, $(MLPICOBJS) will be set to $MLOBJS with all occurrences of .o replaced with .pic_o. On other platforms, position independent code is the default, so $(mypackage.pic_os) will just be the same as $(mypackage.os), which contains a list of the .o files for that module, and $(MLPICOBJS) will be the same as $(MLOBJS).
mmake has support for alternative library directory hierarchies. These have the same structure as the prefix/lib/mercury tree, including the different subdirectories for different grades and different machine architectures.
In order to support the installation of a library into such a tree, you simply need to specify (e.g. in your Mmakefile) the path prefix and the list of grades to install:
INSTALL_PREFIX = /my/install/dir
LIBGRADES = asm_fast asm_fast.gc.tr.debug
This specifies that libraries should be installed in /my/install/dir/lib/mercury, in the default grade plus asm_fast and asm_fast.gc.tr.debug. If INSTALL_PREFIX is not specified, mmake will attempt to install the library in the same place as the standard Mercury libraries. If LIBGRADES is not specified, mmake will use the Mercury compiler's default set of grades, which may or may not correspond to the actual set of grades in which the standard Mercury libraries were installed.
To actually install a library libfoo, use the mmake target libfoo.install. This also installs all the needed interface files, and (if intermodule optimisation is enabled) the relevant intermodule optimisation files.
One can override the list of grades to install for a given library libfoo by setting the LIBGRADES-foo variable, or add to it by setting EXTRA_LIBGRADES-foo.
The command used to install each file is specified by INSTALL. If INSTALL is not specified, cp will be used. The command used to create directories is specified by INSTALL_MKDIR. If INSTALL_MKDIR is not specified, mkdir -p will be used. Note that currently it is not possible to set the installation prefix on a library-by-library basis.
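Continuing the mypackage example, installing the library with the settings shown above is then a single command:

mmake libmypackage.install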
Once a library is installed, using it is easy. Suppose the user wishes to use the library mypackage (installed in the tree rooted at /some/directory/mypackage) and the library myotherlib (installed in the tree rooted at /some/directory/myotherlib). The user need only set the following Mmake variables:
EXTRA_LIB_DIRS = /some/directory/mypackage/lib/mercury \
		/some/directory/myotherlib/lib/mercury
EXTRA_LIBRARIES = mypackage myotherlib
When using --intermodule-optimization with a library which uses the C interface, it may be necessary to add -I options to MGNUCFLAGS so that the C compiler can find any header files used by the library's C code.
Mmake will ensure that the appropriate directories are searched for the relevant interface files, module initialisation files, compiled libraries, etc.
To use a library when invoking mmc directly, use the --mld and --ml options (see Link options). You can also specify whether to link executables with the shared or static versions of Mercury libraries using --mercury-linkage shared or --mercury-linkage static (shared libraries are always linked with the shared versions of libraries).
Beware that the directory name that you must use in EXTRA_LIB_DIRS or as the argument of the --mld option is not quite the same as the name that was specified in the INSTALL_PREFIX when the library was installed — the name needs to have /lib/mercury appended.
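For instance, one possible direct invocation for a program whose top-level module is myprog might look like this (a sketch only; adjust the installation path and linkage to your setup):

mmc --make --mld /some/directory/mypackage/lib/mercury \
	--ml mypackage --mercury-linkage static myprog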
One can specify extra libraries to be used on a program-by-program basis. For instance, if the program foo also uses the library mylib4foo, but the other programs governed by the Mmakefile don't, then one can declare:
EXTRA_LIBRARIES-foo = mylib4foo
Libraries are handled a little differently for the Java grade. Instead of compiling object code into a static or shared library, the class files are added to a jar (Java ARchive) file of the form library-name.jar.
To create or install a Java library, simply specify that you want to use the java grade, either by setting GRADE=java in your Mmakefile, or by including --java or --grade java in your GRADEFLAGS, then follow the instructions as above.
Java libraries are installed to the directory prefix/lib/mercury/lib/java. To include them in a program, in addition to the instructions above, you will need to include the installed jar file in your CLASSPATH, which you can set using --java-classpath jarfile in MCFLAGS.
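For example, assuming the mypackage library was installed under /my/install/dir as in the earlier sketch, an Mmakefile for a Java-grade build of a client program might contain:

GRADEFLAGS = --grade java
MCFLAGS    = --java-classpath /my/install/dir/lib/mercury/lib/java/mypackage.jar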
This section gives a quick and simple guide to getting started with the debugger. The remainder of this chapter contains more detailed documentation.
To use the debugger, you must first compile your program with debugging enabled. You can do this by using one of the --debug or --decl-debug options when invoking mmc, or by including GRADEFLAGS = --debug or GRADEFLAGS = --decl-debug in your Mmakefile.
bash$ mmc --debug hello.m
Once you've compiled with debugging enabled, you can use the mdb command to invoke your program under the debugger:
bash$ mdb ./hello arg1 arg2 ...
Any arguments (such as arg1 arg2 ... in this example) that you pass after the program name will be given as arguments to the program.
The debugger will print a start-up message and will then show you the first trace event, namely the call to main/2:
1: 1 1 CALL pred hello:main/2-0 (det) hello.m:13
mdb>
By hitting enter at the mdb> prompt, you can step through the execution of your program to the next trace event:
2: 2 2 CALL pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
Hello, world
3: 2 2 EXIT pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
For each trace event, the debugger prints out several pieces of information. The three numbers at the start of the display are the event number, the call sequence number, and the call depth. (You don't really need to pay too much attention to those.) They are followed by the event type (e.g. CALL or EXIT). After that comes the identification of the procedure in which the event occurred, consisting of the module-qualified name of the predicate or function to which the procedure belongs, followed by its arity, mode number and determinism. This may sometimes be followed by a “path” (see Tracing of Mercury programs). At the end is the file name and line number of the called procedure and (if available) also the file name and line number of the call.
The most useful mdb commands have single-letter abbreviations. The alias command will show these abbreviations:
mdb> alias
?       =>  help
EMPTY   =>  step
NUMBER  =>  step
P       =>  print *
b       =>  break
c       =>  continue
d       =>  stack
f       =>  finish
g       =>  goto
h       =>  help
p       =>  print
r       =>  retry
s       =>  step
v       =>  vars
The P or print * command will display the values of any live variables in scope. The f or finish command can be used if you want to skip over a call. The b or break command can be used to set break-points. The d or stack command will display the call stack. The quit command will exit the debugger.
That should be enough to get you started. But if you have GNU Emacs installed, you should strongly consider using the Emacs interface to mdb — see the following section.
For more information about the available commands, use the ? or help command, or see Debugger commands.
As well as the command-line debugger, mdb, there is also an Emacs interface to this debugger. Note that the Emacs interface only works with GNU Emacs, not with XEmacs.
With the Emacs interface, the debugger will display your source code as you trace through it, marking the line that is currently being executed, and allowing you to easily set breakpoints on particular lines in your source code. You can have separate windows for the debugger prompt, the source code being executed, and for the output of the program being executed. In addition, most of the mdb commands are accessible via menus.
To start the Emacs interface, you first need to put the following text in the file .emacs in your home directory, replacing “/usr/local/mercury-1.0” with the directory that your Mercury implementation was installed in.
(setq load-path (cons (expand-file-name
  "/usr/local/mercury-1.0/lib/mercury/elisp")
  load-path))
(autoload 'mdb "gud" "Invoke the Mercury debugger" t)
Build your program with debugging enabled, as described in Quick overview or Preparing a program for debugging. Then start up Emacs, e.g. using the command emacs, and type M-x mdb <RET>. Emacs will then prompt you for the mdb command to invoke:
Run mdb (like this): mdb
and you should type in the name of the program that you want to debug and any arguments that you want to pass to it:
Run mdb (like this): mdb ./hello arg1 arg2 ...
Emacs will then create several “buffers”: one for the debugger prompt, one for the input and output of the program being executed, and one or more for the source files. By default, Emacs will split the display into two parts, called “windows”, so that two of these buffers will be visible. You can use the command C-x o to switch between windows, and you can use the command C-x 2 to split a window into two windows. You can use the “Buffers” menu to select which buffer is displayed in each window.
If you're using X-Windows, then it is a good idea to set the Emacs variable pop-up-frames to t before starting mdb, since this will cause each buffer to be displayed in a new “frame” (i.e. a new X window). You can set this variable interactively using the set-variable command, i.e. M-x set-variable <RET> pop-up-frames <RET> t <RET>. Or you can put (setq pop-up-frames t) in the .emacs file in your home directory.
For more information on buffers, windows, and frames, see the Emacs documentation.
Another useful Emacs variable is gud-mdb-directories. This specifies the list of directories to search for source files. You can use a command such as
M-x set-variable <RET> gud-mdb-directories <RET> (list "/foo/bar" "../other" "/home/guest") <RET>
to set it interactively, or you can put a command like
(setq gud-mdb-directories (list "/foo/bar" "../other" "/home/guest"))
in your .emacs file.
At each trace event, the debugger will search for the source file corresponding to that event, first in the same directory as the program, and then in the directories specified by the gud-mdb-directories variable. It will display the source file, with the line number corresponding to that trace event marked by an arrow (=>) at the start of the line.
Several of the debugger features can be accessed by moving the cursor to the relevant part of the source code and then selecting a command from the menu. You can set a break point on a line by moving the cursor to the appropriate line in your source code (e.g. with the arrow keys, or by clicking the mouse there), and then selecting the “Set breakpoint on line” command from the “Breakpoints” sub-menu of the “MDB” menu. You can set a breakpoint on a procedure by moving the cursor over the procedure name and then selecting the “Set breakpoint on procedure” command from the same menu. And you can display the value of a variable by moving the cursor over the variable name and then selecting the “Print variable” command from the “Data browsing” sub-menu of the “MDB” menu. Most of the menu commands also have keyboard short-cuts, which are displayed on the menu.
Note that mdb's context command should not be used if you are using the Emacs interface, otherwise the Emacs interface won't be able to parse the file names and line numbers that mdb outputs, and so it won't be able to highlight the correct location in the source code.
The Mercury debugger is based on a modified version of the box model on which the four-port debuggers of most Prolog systems are based. Such debuggers abstract the execution of a program into a sequence, also called a trace, of execution events of various kinds. The four kinds of events supported by most Prolog systems (their ports) are call, exit, redo and fail.
Mercury also supports these four kinds of events, but not all events can occur for every procedure call. Which events can occur for a procedure call, and in what order, depend on the determinism of the procedure. The possible event sequences for procedures of the various determinisms are as follows.
In addition to these four event types, Mercury supports exception events. An exception event occurs when an exception has been thrown inside a procedure, and control is about to propagate this exception to the caller. An exception event can replace the final exit or fail event in the event sequences above or, in the case of erroneous procedures, can come after the call event.
Besides the event types call, exit, redo, fail and exception, which describe the interface of a call, Mercury also supports several types of events that report on what is happening internal to a call. Each of these internal event types has an associated parameter called a path. The internal event types are:
A path is a sequence of path components separated by semicolons. Each path component is one of the following:
cnum
The goal is the num'th conjunct of a conjunction.
dnum
The goal is the num'th disjunct of a disjunction.
snum
The goal is the num'th arm of a switch.
?
The goal is the condition of an if-then-else.
t
The goal is the then part of an if-then-else.
e
The goal is the else part of an if-then-else.
~
The goal is the body of a negation.
q
The goal is the body of a quantification.
A path describes the position of a goal inside the body of a procedure definition. For example, if the procedure body is a disjunction in which each disjunct is a conjunction, then the path d2;c3; denotes the third conjunct within the second disjunct. If the third conjunct within the second disjunct is an atomic goal such as a call or a unification, then this will be the only goal whose path has d2;c3; as a prefix. If it is a compound goal, then its components will all have paths that have d2;c3; as a prefix, e.g. if it is an if-then-else, then its three components will have the paths d2;c3;?;, d2;c3;t; and d2;c3;e;.
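As a small sketch of how paths map onto source code (assuming the compiler keeps this disjunction as a disjunction rather than turning it into a switch, as discussed below), consider a clause body whose second disjunct is a conjunction:

p(X, Y) :-
    (   q(X)            % this call has path d1;
    ;   r(X, Z),        % path d2;c1;
        s(Z, Y)         % path d2;c2;
    ).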
Paths refer to the internal form of the procedure definition. When debugging is enabled (and the option --trace-optimized is not given), the compiler will try to keep this form as close as possible to the source form of the procedure, in order to make event paths as useful as possible to the programmer. Due to the compiler's flattening of terms, and its introduction of extra unifications to implement calls in implied modes, the number of conjuncts in a conjunction will frequently differ between the source and internal form of a procedure. This is rarely a problem, however, as long as you know about it. Mode reordering can be a bit more of a problem, but it can be avoided by writing single-mode predicates and functions so that producers come before consumers. The compiler transformation that potentially causes the most trouble in the interpretation of goal paths is the conversion of disjunctions into switches. In most cases, a disjunction is transformed into a single switch, and it is usually easy to guess, just from the events within a switch arm, which disjunct the switch arm corresponds to. Some cases are more complex; for example, it is possible for a single disjunction to be transformed into several switches, possibly with other, smaller disjunctions inside them. In such cases, making sense of goal paths may require a look at the internal form of the procedure. You can ask the compiler to generate a file with the internal forms of the procedures in a given module by including the options -dfinal -Dpaths on the command line when compiling that module.
When you compile a Mercury program, you can specify whether you want to be able to run the Mercury debugger on the program or not. If you do, the compiler embeds calls to the Mercury debugging system into the executable code of the program, at the execution points that represent trace events. At each event, the debugging system decides whether to give control back to the executable immediately, or whether to first give control to you, allowing you to examine the state of the computation and issue commands.
Mercury supports two broad ways of preparing a program for debugging. The simpler way is to compile a program in a debugging grade, which you can do directly by specifying a grade that includes the word “debug” or “decldebug” (e.g. asm_fast.gc.debug, or asm_fast.gc.decldebug), or indirectly by specifying one of the --debug or --decl-debug grade options to the compiler, linker, and other tools (in particular mmc, mgnuc, ml, and c2init).
If you follow this way, and accept the default settings of the various compiler options that control the selection of trace events (which are described below), you will be assured of being able to get control at every execution point that represents a potential trace event, which is very convenient.
The “decldebug” grades improve declarative debugging by tracking the source of marked subterms (see Improving the search). Doing this substantially increases the size of executables so these grades should only be used when the subterm dependency tracking feature of the declarative debugger is required. Note that declarative debugging, with the exception of the subterm dependency tracking features, also works in the .debug grades.
The two drawbacks of using a debugging grade are the large size of the resulting executables, and the fact that often you discover that you need to debug a big program only after having built it in a non-debugging grade. This is why Mercury also supports another way to prepare a program for debugging, one that does not require the use of a debugging grade. With this way, you can decide, individually for each module, which of four trace levels, none, shallow, deep, and rep you want to compile them with:
The intended uses of these trace levels are as follows.
In general, it is a good idea for most or all modules that can be called from modules compiled with trace level deep or rep to be compiled with at least trace level shallow.
You can control what trace level a module is compiled with by giving one of the following compiler options:
As the name implies, the last alternative is the default, which is why by default you get no debugging capability in non-debugging grades and full debugging capability in debugging grades. The table also shows that in a debugging grade, no module can be compiled with trace level none.
Important note: If you are not using a debugging grade, but you compile some modules with a trace level other than none, then you must also pass the --trace (or -t) option to c2init and to the Mercury linker. If you're using Mmake, then you can do this by including --trace in the MLFLAGS variable.
If you're using Mmake, then you can also set the compilation options for a single module named Module by setting the Mmake variable MCFLAGS-Module. For example, to compile the file foo.m with deep tracing, bar.m with shallow tracing, and everything else with no tracing, you could use the following:
MLFLAGS = --trace
MCFLAGS-foo = --trace deep
MCFLAGS-bar = --trace shallow
By default, all trace levels other than none turn off all compiler optimizations that can affect the sequence of trace events generated by the program, such as inlining. If you are specifically interested in how the compiler's optimizations affect the trace event sequence, you can specify the option --trace-optimized, which tells the compiler that it does not have to disable those optimizations. (A small number of low-level optimizations have not yet been enhanced to work properly in the presence of tracing, so the compiler disables these even if --trace-optimized is given.)
The executables of Mercury programs by default do not invoke the Mercury debugger even if some or all of their modules were compiled with some form of tracing, and even if the grade of the executable is a debugging grade. This is similar to the behaviour of executables created by the implementations of other languages; for example, the executable of a C program compiled with -g does not automatically invoke gdb or dbx etc. when it is executed.
Unlike those other language implementations, when you invoke the Mercury debugger mdb, you invoke it not just with the name of an executable but with the command line you want to debug. If something goes wrong when you execute the command
prog arg1 arg2 ...
and you want to find the cause of the problem, you must execute the command
mdb prog arg1 arg2 ...
because you do not get a chance to specify the command line of the program later.
When the debugger starts up, as part of its initialization it executes commands from the following three sources, in order:
The operation of the Mercury debugger mdb is based on the following concepts.
The effect of a break point depends on the state of the break point.
Neither of these will happen if the break point is disabled.
Every break point has a print list. Every time execution stops at an event that matches the breakpoint, mdb implicitly executes a print command for each element in the breakpoint's print list. A print list element can be the word goal, which causes the goal to be printed as if by print goal; it can be the word *, which causes all the variables to be printed as if by print *; or it can be the name or number of a variable, possibly followed (without white space) by a term path, which causes the specified variable or part thereof to be printed as if the element were given as an argument to the print command.
If the debugger receives an interrupt (e.g. if the user presses control-C), it will stop at the next event regardless of what command it is executing at the time.
Regardless of the print level, the debugger will print any event that causes execution to stop and user interaction to start.
In Mercury, predicates that want to do I/O must take a di/uo pair of I/O state arguments. Some of these predicates call other predicates to do I/O for them, but some are I/O primitives, i.e. they perform the I/O themselves. The Mercury standard library provides a large set of these primitives, and programmers can write their own through the foreign language interface. An I/O action is the execution of one call to an I/O primitive.
In debugging grades, the Mercury implementation has the ability to automatically record, for every I/O action, the identity of the I/O primitive involved in the action and the values of all its arguments. The size of the table storing this information is proportional to the number of tabled I/O actions, which are the I/O actions whose details are entered into the table. Therefore the tabling of I/O actions is never turned on automatically; instead, users must ask for I/O tabling to start with the table_io start command in mdb.
The purpose of I/O tabling is to enable transparent retries across I/O actions. (The mdb retry command restores the computation to a state it had earlier, allowing the programmer to explore code that the program has already executed; see its documentation in the Debugger commands section below.) In the absence of I/O tabling, retries across I/O actions can have bad consequences. Retry of a goal that reads some input requires that input to be provided twice; retry of a goal that writes some output generates duplicate output. Retry of a goal that opens a file leads to a file descriptor leak; retry of a goal that closes a file can lead to errors (duplicate closes, reads from and writes to closed files).
I/O tabling avoids these problems by making I/O primitives idempotent. This means that they will generate their desired effect when they are first executed, but reexecuting them after a retry won't have any further effect. The Mercury implementation achieves this by looking up the action (which is identified by an I/O action number) in the table and returning the output arguments stored in the table for the given action without executing the code of the primitive.
Starting I/O tabling when the program starts execution and leaving it enabled for the entire program run will work well for program runs that don't do lots of I/O. For program runs that do lots of I/O, the table can fill up all available memory. In such cases, the programmer may enable I/O tabling with table_io start just before the program enters the part they wish to debug and in which they wish to be able to perform transparent retries across I/O actions, and turn it off with table_io stop after execution leaves that part.
The commands table_io start and table_io stop can each be given only once during an mdb session. They divide the execution of the program into three phases: before table_io start, between table_io start and table_io stop, and after table_io stop. Retries across I/O will be transparent only in the middle phase.
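A typical session therefore brackets just the region of interest, along these lines:

mdb> table_io start
    ... (step through, and retry within, the I/O-performing code being debugged) ...
mdb> table_io stop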
When the debugger (as opposed to the program being debugged) is interacting with the user, the debugger prints a prompt and reads in a line of text, which it will interpret as its next command line. A command line consists of a single command, or several commands separated by semicolons. Each command consists of several words separated by white space. The first word is the name of the command, while any other words give options and/or parameters to the command.
A word may itself contain semicolons or whitespace if it is enclosed in single quotes ('). This is useful for commands that have other commands as parameters, for example view -w 'xterm -e'. Characters that have special meaning to mdb will be treated like ordinary characters if they are escaped with a backslash (\). It is possible to escape single quotes, whitespace, semicolons, newlines and the escape character itself.
Some commands take a number as their first parameter. For such commands, users can type `number command' as well as `command number'. The debugger will treat the former as the latter, even if the number and the command are not separated by white space.
query module1 module2 ...
cc_query module1 module2 ...
io_query module1 module2 ...
These commands allow you to type in queries (goals) interactively in the debugger. When you use one of these commands, the debugger will respond with a query prompt (?- or run <--), at which you can type in a goal; the debugger will then compile and execute the goal and display the answer(s). You can return from the query prompt to the mdb> prompt by typing the end-of-file indicator (typically control-D or control-Z), or by typing quit.
The module names module1, module2, ... specify which modules will be imported. Note that you can also add new modules to the list of imports directly at the query prompt, by using a command of the form [module], e.g. [int]. You need to import all the modules that define symbols used in your query. Queries can only use symbols that are exported from a module; entities declared only in a module's implementation section cannot be used.
The three variants differ in what kind of goals they allow. For goals which perform I/O, you need to use io_query; this lets you type in the goal using DCG syntax. For goals which don't do I/O, but which have determinism cc_nondet or cc_multi, you need to use cc_query; this finds only one solution to the specified goal. For all other goals, you can use plain query, which finds all the solutions to the goal.
For query and cc_query, the debugger will print out all the variables in the goal using io.write. The goal must bind all of its variables to ground terms, otherwise you will get a mode error.
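As a sketch, a query session to sort a list might begin something like this (the exact answer format is not shown, since it depends on io.write):

mdb> query list int
?- list.sort([2, 3, 1], Sorted).

after which the debugger compiles the goal, runs it, and prints the value bound to Sorted.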
The current implementation works by compiling the queries on-the-fly and then dynamically linking them into the program being debugged. Thus it may take a little while for your query to be executed. Each query will be written to a file named mdb_query.m in the current directory, so make sure you don't name your source file mdb_query.m. Note that dynamic linking may not be supported on some systems; if you are using a system for which dynamic linking is not supported, you will get an error message when you try to run these commands.
You may also need to build your program using shared libraries for interactive queries to work. With Linux on the Intel x86 architecture, the default is for executables to be statically linked, which means that dynamic linking won't work, and hence interactive queries won't work either (the error message is rather obscure: the dynamic linker complains about the symbol __data_start being undefined). To build with shared libraries, you can use MGNUCFLAGS=--pic-reg and MLFLAGS=--shared in your Mmakefile. See the README.Linux file in the Mercury distribution for more details.
step [-NSans] [num]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is not strict, and it uses the default print level.
A command line containing only a number num is interpreted as if it were `step num'.
An empty command line is interpreted as `step 1'.
goto [-NSans] num
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
next [-NSans] [num]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
finish [-NSans] [num]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
exception [-NSans]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
return [-NSans]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
forward [-NSans]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
mindepth [-NSans] depth
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
maxdepth [-NSans] depth
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
continue [-NSans]
The options -n or --none, -s or --some, -a or --all specify the print level to use for the duration of the command, while the options -S or --strict and -N or --nostrict specify the strictness of the command.
By default, this command is not strict. The print level used by the command by default depends on the final strictness level: if the command is strict, it is none, otherwise it is some.
retry [-fio] [num]
The command will report an error unless the values of all the input arguments of the selected call are available at the return site at which control would reenter the selected call. (The compiler will keep the values of the input arguments of traced predicates as long as possible, but it cannot keep them beyond the point where they are destructively updated.) The exception is values of type `io.state'; the debugger can perform a retry if the only missing value is of type `io.state' (there can be only one io.state at any given time).
Retries over I/O actions are guaranteed to be safe only if the events at which the retry starts and ends are both within the I/O tabled region of the program's execution. If the retry is not guaranteed to be safe, the debugger will normally ask the user if they really want to do this. The option -f or --force suppresses the question, telling the debugger that retrying over I/O is OK; the option -o or --only-if-safe suppresses the question, telling the debugger that retrying over I/O is not OK; the option -i or --interactive restores the question if a previous option suppressed it.
vars
print [-fpv] name[termpath]
print [-fpv] num[termpath]
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for printing.
print [-fpv] *
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for printing.
print [-fpv]
print [-fpv] goal
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for printing.
print [-fpv] exception
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for printing.
print [-fpv] action num
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for printing.
browse [-fpvx] name[termpath]
browse [-fpvx] num[termpath]
The interactive term browser allows you to selectively examine particular subterms. The depth and size of printed terms may be controlled. The displayed terms may also be clipped to fit within a single screen.
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for browsing. The -x or --xml option tells mdb to dump the value of the variable to an XML file and then invoke an XML browser on the file. The XML filename as well as the command to invoke the XML browser can be set using the set command. See the documentation for set for more details.
For further documentation on the interactive term browser, invoke the browse command from within mdb and then type help at the browser> prompt.
browse [-fpvx]
browse [-fpvx] goal
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for browsing. The -x or --xml option tells mdb to dump the goal to an XML file and then invoke an XML browser on the file. The XML filename as well as the command to invoke the XML browser can be set using the set command. See the documentation for set for more details.
browse [-fpvx] exception
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for browsing. The -x or --xml option tells mdb to dump the exception to an XML file and then invoke an XML browser on the file. The XML filename as well as the command to invoke the XML browser can be set using the set command. See the documentation for set for more details.
browse [-fpvx] action num
The options -f or --flat, -p or --pretty, and -v or --verbose specify the format to use for browsing. The -x or --xml option tells mdb to dump the io action representation to an XML file and then invoke an XML browser on the file. The XML filename as well as the command to invoke the XML browser can be set using the set command. See the documentation for set for more details.
stack [-d] [-f numframes] [numlines]
The option -d or --detailed specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
The -f option, if present, specifies that only the topmost numframes stack frames should be printed.
The optional number numlines, if present, specifies that only the topmost numlines lines should be printed.
This command will report an error if there is no stack trace information available about any ancestor.
up [-d] [num]
If num is not specified, the default value is one.
This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.
The option -d or --detailed specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
down [-d] [num]
If num is not specified, the default value is one.
This command will report an error if there is no execution trace information about the requested descendant.
The option -d or --detailed specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
level [-d] [num]
This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.
The option -d or --detailed specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
current
set [-APBfpv] param value
You can use the apostrophe character (') to quote the command string when using the set command, for example "set xml_browser_cmd 'firefox file:///tmp/mdbtmp.xml'".
The browser maintains separate configuration parameters for the three commands print *, print var, and browse var. A single set command can modify the parameters for more than one of these; the options -A or --print-all, -P or --print, and -B or --browse select which commands will be affected by the change. If none of these options is given, the default is to affect all commands.
The browser also maintains separate configuration parameters for the three different output formats. This applies to all parameters except for the format itself. The options -f or --flat, -p or --pretty, and -v or --verbose select which formats will be affected by the change. If none of these options is given, the default is to affect all formats. In the case that the format itself is being set, these options are ignored.
view [-vf2] [-w window-cmd] [-s server-cmd] [-n server-name] [-t timeout]
view -c [-v] [-s server-cmd] [-n server-name]
The debugger only updates one window at a time. If you try to open a new source window when there is already one open, this command aborts with an error message.
The variant with -c (or --close) does not open a new window but instead attempts to close a currently open source window. The attempt may fail if, for example, the user has modified the source file without saving.
The option -v (or --verbose) prints the underlying system calls before running them, and prints any output the calls produced. This is useful to find out what is wrong if the server does not start.
The option -f (or --force) stops the command from aborting if there is already a window open. Instead it attempts to close that window first.
The option -2 (or --split-screen) starts the vim server with two windows, which allows both the callee as well as the caller to be displayed at interface events. The lower window shows what would normally be seen if the split-screen option was not used, which at interface events is the caller. At these events, the upper window shows the callee definition. At internal events, the lower window shows the associated source, and the view in the upper window (which is not interesting at these events) remains unchanged.
The option -w (or --window-command) specifies the command to open a new window. The default is xterm -e.
The option -s (or --server-command) specifies the command to start the server. The default is vim.
The option -n (or --server-name) specifies the name of an existing server. Instead of starting up a new server, mdb will attempt to connect to the existing one.
The option -t (or --timeout) specifies the maximum number of seconds to wait for the server to start.
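For example, the following commands (the timeout value is chosen arbitrarily) open a split-screen vim server, waiting up to ten seconds for it to start, and later close it again:
mdb> view -2 -t 10
mdb> view -c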
save_to_file [-x] goal filename
save_to_file [-x] exception filename
save_to_file [-x] name filename
save_to_file [-x] num filename
break [-PS] [-E ignore-count] [-I ignore-count] [-n] [-p print-spec]* filename:linenumber
The options -P or --print, and -S or --stop specify the action to be taken at the break point.
The options -Eignore-count and --ignore-entry ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of a call event that matches the breakpoint. The options -Iignore-count and --ignore-interface ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of interface events that match the breakpoint.
Each occurrence of the options -pprintspec and --print-list printspec tells the debugger to include the specified entity in the breakpoint's print list.
Normally, if a variable with the given name or number doesn't exist when execution reaches the breakpoint, mdb will issue a warning. The option -n or --no-warn, if present, suppresses this warning. This can be useful if e.g. the name is the name of an output variable, which of course won't be present at call events.
By default, the action of the break point is stop, the ignore count is zero, and the print list is empty.
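For example, assuming a source file named queens.m (a hypothetical name), the first command below sets an ordinary stop breakpoint at line 42, while the second sets a print breakpoint there whose print list includes the variable Board:
mdb> break queens.m:42
mdb> break -P -p Board queens.m:42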
break [-AOPSaei] [-E ignore-count] [-I ignore-count] [-n] [-p print-spec]* proc-spec
The options -A or --select-all, and -O or --select-one select the action to be taken if the specification matches more than one procedure. If you have specified option -A or --select-all, mdb will put a breakpoint on all matched procedures, whereas if you have specified option -O or --select-one, mdb will report an error. By default, mdb will ask you whether you want to put a breakpoint on all matched procedures or just one, and if so, which one.
The options -P or --print, and -S or --stop specify the action to be taken at the break point.
The options -a or --all, -e or --entry, and -i or --interface specify the invocation conditions of the break point. If none of these options are specified, the default is the one indicated by the current scope (see the scope command below). The initial scope is interface.
The options -Eignore-count and --ignore-entry ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of a call event that matches the breakpoint. The options -Iignore-count and --ignore-interface ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of interface events that match the breakpoint.
Each occurrence of the options -pprintspec and --print-list printspec tells the debugger to include the specified entity in the breakpoint's print list.
Normally, if a variable with the given name or number doesn't exist when execution reaches the breakpoint, mdb will issue a warning. The option -n or --no-warn, if present, suppresses this warning. This can be useful if e.g. the name is the name of an output variable, which of course won't be present at call events.
By default, the action of the break point is stop, its invocation condition is interface, the ignore count is zero, and the print list is empty.
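For example, assuming a predicate qperm/2 defined in a module queens (hypothetical names), and assuming that the procedure specification is written in module.name/arity form, one might type:
mdb> break queens.qperm/2
mdb> break -e -S queens.qperm/2
The second command asks for a breakpoint whose invocation condition is entry events and whose action is stop.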
break [-PS] [-E ignore-count] [-I ignore-count] [-n] [-p print-spec]* here
The options -P or --print, and -S or --stop specify the action to be taken at the break point.
The options -Eignore-count and --ignore-entry ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of a call event that matches the breakpoint. The options -Iignore-count and --ignore-interface ignore-count tell the debugger to ignore the breakpoint until after ignore-count occurrences of interface events that match the breakpoint.
Each occurrence of the options -pprintspec and --print-list printspec tells the debugger to include the specified entity in the breakpoint's print list.
Normally, if a variable with the given name or number doesn't exist when execution reaches the breakpoint, mdb will issue a warning. The option -n or --no-warn, if present, suppresses this warning. This can be useful if e.g. the name is the name of an output variable, which of course won't be present at call events.
By default, the action of the break point is stop, the ignore count is zero, and the print list is empty.
break info
condition [-n break-num] [-p] [-v] varname[pathspec] op term
The condition is a match between a variable live at the breakpoint, or a part thereof, and term. It is OK for term to contain spaces. The term from the program to be matched is specified by varname; if it is followed by pathspec (without a space), the match is against the specified part of varname.
There are two kinds of values allowed for op. If op is = or ==, the condition is true if the term specified by varname (and pathspec, if present) matches term. If op is != or \=, the condition is true if the term specified by varname (and pathspec, if present) doesn't match term. term may contain integers and strings (as long as the strings don't contain double quotes), but floats and characters aren't supported (yet), and neither is any special syntax for lists, operators, etc. term also may not contain variables, with one exception: any occurrence of _ in term matches any term.
If execution reaches a breakpoint and the condition cannot be evaluated, execution will normally stop at that breakpoint with a message to that effect. If the -p or --dont-require-path option is given, execution won't stop at breakpoints at which the specified part of the specified variable doesn't exist. If the -v or --dont-require-var option is given, execution won't stop at breakpoints at which the specified variable itself doesn't exist.
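For example (the variable names and the breakpoint number are hypothetical, and we assume that without -n the condition attaches to the most recently added breakpoint), the following attach two simple conditions:
mdb> condition N = 42
mdb> condition -n 2 List != []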
ignore [-E ignore-count] [-I ignore-count] num
ignore [-E ignore-count] [-I ignore-count]
break_print [-fpv] [-e] [-n] num print-spec*
Normally, if a variable with the given name or number doesn't exist when execution reaches the breakpoint, mdb will issue a warning. The option -n or --no-warn, if present, suppresses this warning. This can be useful if e.g. the name is the name of an output variable, which of course won't be present at call events.
Normally, the specified elements will be added at the start of the breakpoint's print list. The option -e or --end, if present, causes them to be added at the end.
By default, the specified elements will be printed with format "flat". The options -f or --flat, -p or --pretty, and -v or --verbose, if given, explicitly specify the format to use.
disable num
disable *
disable
enable num
enable *
enable
delete num
delete *
delete
modules
procedures module
register
table_io
table_io start
table_io stop
table_io stats
mmc_options option1 option2 ...
printlevel none
printlevel some
printlevel all
printlevel
echo on
echo off
echo
scroll on
scroll off
scroll size
scroll
stack_default_limit size
context none
context before
context after
context prevline
context nextline
context
goal_paths on
goal_paths off
goal_paths
scope all
scope interface
scope entry
scope
alias name command [command-parameter ...]
If name is the upper-case word EMPTY, the debugger will substitute the given command and parameters whenever the user types in an empty command line.
If name is the upper-case word NUMBER, the debugger will insert the given command and parameters before the command line whenever the user types in a command line that consists of a single number.
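For example, the following commands (which mirror mdb's usual default aliases, assuming the step and goto commands are available) make an empty command line repeat step and make a bare number behave as a goto:
mdb> alias EMPTY step
mdb> alias NUMBER goto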
unalias name
document_category slot category
document category slot item
help category item
help word
help
The following commands relate to the declarative debugger. See Declarative debugging for details.
dd [-d depth] [-s search-mode]
When searching for bugs the declarative debugger needs to keep portions of the execution trace in memory. If it requires a new portion of the trace then it needs to rerun the program. The -ddepth and --depth-step-size depth options tell the declarative debugger how much of the execution trace to gather when it reruns the program. A higher depth will require more memory, but will improve the performance of the declarative debugger for long running programs since it will not have to rerun the program as much.
The -ssearch-mode or --search-mode search-mode option tells the declarative debugger which search mode to use. Either top-down or divide-and-query may be specified. See Search Modes for a more detailed description of the available search modes. top-down is the default when this option is not given.
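For example, the following invocations (the depth step value is arbitrary) start the declarative debugger with a larger depth step size, and with the divide-and-query search mode, respectively:
mdb> dd -d 5
mdb> dd -s divide-and-query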
trust module-name|proc-spec
Individual predicates or functions can be trusted by just giving the predicate or function name. If there is more than one predicate or function with the given name then a list of alternatives will be shown.
The entire Mercury standard library is trusted by default and can be untrusted in the usual manner using the `untrust' command. To restore trusted status to the Mercury standard library issue the command `trust standard library' or just `trust std lib'.
See also `trusted' and `untrust'.
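For example (mymodule is a hypothetical module name, and the number passed to `untrust' is whatever numbering `trusted' reports):
mdb> trust mymodule
mdb> trusted
mdb> untrust 0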
trusted
untrust num
histogram_all filename
histogram_exp filename
clear_histogram
source [-i] filename
The option -i or --ignore-errors tells mdb not to complain if the named file does not exist or is not readable.
save filename
quit [-y]
End-of-file on the debugger's input is considered a quit command.
The following commands are intended for use by the developers of the Mercury implementation.
var_details
flag
flag flagname
flag flagname on
flag flagname off
subgoal n
consumer n
gen_stack
cut_stack
pneg_stack
mm_stacks
nondet_stack [-d] [-f numframes] [numlines]
The -f option, if present, specifies that only the topmost numframes stack frames should be printed.
The optional number numlines, if present, specifies that only the topmost numlines lines should be printed.
stack_regs
all_regs
debug_vars
proc_stats
proc_stats filename
label_stats
label_stats filename
var_name_stats
var_name_stats filename
print_optionals
print_optionals on
print_optionals off
unhide_events
unhide_events on
unhide_events off
dd_dd
table proc [num1 ...]
For now, this command is supported only for procedures whose arguments are all either integers, floats or strings.
If the user specifies one or more integers on the command line, the output is restricted to the entries in the call table in which the nth argument is equal to the nth number on the command line.
type_ctor [-fr] modulename typectorname arity
If the -r or --print-rep option is given, it also prints the name of the type representation scheme used by the type constructor (known as its `type_ctor_rep' in the implementation).
If the -f or --print-functors option is given, it also prints the names and arities of the function symbols defined by the type constructor.
all_type_ctors [-fr] [modulename]
If the -r or --print-rep option is given, it also prints the name of the type representation scheme of each type constructor (known as its `type_ctor_rep' in the implementation).
If the -f or --print-functors option is given, it also prints the names and arities of the function symbols defined by each type constructor.
class_decl [-im] modulename typeclassname arity
If the -m or --print-methods option is given, it also lists all the methods of the type class.
If the -i or --print-instance option is given, it also lists all the instances of the type class.
all_class_decls [-im] [modulename]
If the -m or --print-methods option is given, it also lists all the methods of each type class.
If the -i or --print-instance option is given, it also lists all the instances of each type class.
all_procedures [-su] filename
If the -s or --separate option is given, the various components of procedure names are separated by spaces.
If the -u or --uci option is given, the list will include the procedures of compiler generated unify, compare, index and initialization predicates. Normally, the list includes the procedures of only user defined predicates.
The debugger incorporates a declarative debugger which can be accessed from its command line. Starting from an event that exhibits a bug, e.g. an event giving a wrong answer, the declarative debugger can find a bug which explains that behaviour using knowledge of the intended interpretation of the program only.
Note that this is a work in progress, so there are some limitations in the implementation. The main limitations are pointed out in the following sections.
Every CALL event corresponds to an atomic goal, the one printed by the "print" command at that event. This atom has the actual arguments in the input argument positions and distinct free variables in the output argument positions (including the return value for functions). We refer to this as the call atom of the event.
The same view can be taken of EXIT events, although in this case the outputs as well as the inputs will be bound. We refer to this as the exit atom of the event. The exit atom is always an instance of the call atom for the corresponding CALL event.
Using these concepts, it is possible to interpret the events at which control leaves a procedure as assertions about the semantics of the program. These assertions may be true or false, depending on whether or not the program's actual semantics are consistent with its intended semantics.
If one of these assertions is wrong, then we consider the event to represent incorrect behaviour of the program. If the user encounters an event for which the assertion is wrong, then they can request the declarative debugger to diagnose the incorrect behaviour by giving the dd command to the procedural debugger at that event.
Once the dd command has been given, the declarative debugger asks the user a series of questions about the truth of various assertions in the intended interpretation. The first question in this series will be about the validity of the event for which the dd command was given. The answer to this question will nearly always be “no”, since the user has just implied the assertion is false by giving the dd command. Later questions will be about other events in the execution of the program, not all of them necessarily of the same kind as the first.
The user is expected to act as an “oracle” and provide answers to these questions based on their knowledge of the intended interpretation. The debugger provides some help here: previous answers are remembered and used where possible, so questions are not repeated unnecessarily. Commands are available to provide answers, as well as to browse the arguments more closely or to change the order in which the questions are asked. See the next section for details of the commands that are available.
When seeking to determine the validity of the assertion corresponding to an EXIT event, the declarative debugger prints the exit atom followed by the question Valid? for the user to answer. The atom is printed using the same mechanism that the debugger uses to print values, which means some arguments may be abbreviated if they are too large.
When seeking to determine the validity of the assertion corresponding to a FAIL event, the declarative debugger prints the call atom, prefixed by Call, followed by each of the exit atoms (indented, and on multiple lines if need be), and prints the question Complete? (or Unsatisfiable? if there are no solutions) for the user to answer. Note that the user is not required to provide any missing instance in the case that the answer is no. (A limitation of the current implementation is that it is difficult to browse a specific exit atom. This will hopefully be addressed in the near future.)
When seeking to determine the validity of the assertion corresponding to an EXCP event, the declarative debugger prints the call atom followed by the exception that was thrown, and prints the question Expected? for the user to answer.
In addition to asserting whether a call behaved correctly or not, the user may also assert that a call should never have occurred in the first place, because its inputs violated some precondition of the call; for example, an unsorted list may have been passed to a predicate that is designed to work only with sorted lists. Such calls should be deemed inadmissible by the user. This tells the declarative debugger that either the call was given the wrong input by its caller, or whatever generated the input is incorrect.
In some circumstances the declarative debugger provides a default answer to the question. If this is the case, the default answer will be shown in square brackets immediately after the question, and simply pressing return is equivalent to giving that answer.
At the above mentioned prompts, the following commands may be given. Each command (with the exception of pd) can also be abbreviated to just its first letter.
yes
no
inadmissible
trust
trust module
skip
browse [n]
mark [term-path]
pd
abort
help
It is also legal to press return without specifying a command. If there is a default answer (see Oracle questions), pressing return is equivalent to giving that answer. If there is no default answer, pressing return is equivalent to the skip command.
If the oracle keeps providing answers to the asked questions, then the declarative debugger will eventually locate a bug. A “bug”, for our purposes, is an assertion about some call which is false, but for which the assertions about every child of that call are not false (i.e. they are either correct or inadmissible). There are four different classes of bugs that this debugger can diagnose, one associated with each kind of assertion.
Assertions about EXIT events lead to a kind of bug we call an “incorrect contour”. This is a contour (an execution path through the body of a clause) which results in a wrong answer for that clause. When the debugger diagnoses a bug of this kind, it displays the exit atoms in the contour. The resulting incorrect exit atom is displayed last. The program event associated with this bug, which we call the “bug event”, is the exit event at the end of the contour.
Assertions about FAIL events lead to a kind of bug we call a “partially uncovered atom”. This is a call atom which has some instance which is valid, but which is not covered by any of the applicable clauses. When the debugger diagnoses a bug of this kind, it displays the call atom; it does not, however, provide an actual instance that satisfies the above condition. The bug event in this case is the fail event reached after all the solutions were exhausted.
Assertions about EXCP events lead to a kind of bug we call an “unhandled exception”. This is a contour which throws an exception that needs to be handled but which is not handled. When the debugger diagnoses a bug of this kind, it displays the call atom followed by the exception which was not handled. The bug event in this case is the exception event for the call in question.
If the assertion made by an EXIT, FAIL or EXCP event is false, and one or more of the children of the call that resulted in the incorrect EXIT, FAIL or EXCP event is inadmissible while all the other calls are correct, then an “inadmissible call” bug has been found. This is a call that behaved incorrectly (by producing incorrect output, failing, or throwing an exception) because it passed unexpected input to one of its children. The guilty call is displayed as well as the inadmissible child.
After the diagnosis is displayed, the user is asked to confirm that the event located by the declarative debugger does in fact represent a bug. The user can answer yes or y to confirm the bug, no or n to reject the bug, or abort or a to abort the diagnosis.
If the user confirms the diagnosis, they are returned to the procedural debugger at the event which was found to be the bug event. This gives the user an opportunity, if they need it, to investigate (procedurally) the events in the neighbourhood of the bug.
If the user rejects the diagnosis, which implies that some of their earlier answers may have been mistakes, diagnosis is resumed from some earlier point determined by the debugger. The user may now be asked questions they have already answered, with the previous answer they gave being the default, or they may be asked entirely new questions.
If the user aborts the diagnosis, they are returned to the event at which the dd command was given.
Currently the declarative debugger can operate in one of two modes when searching for a bug. The mode to use can be specified as an option to the dd command. See Declarative debugging mdb commands for information on how to do this. The specified search mode will always be used unless a subterm is marked, or unless the user hasn't yet answered `no' to any question (in which case top-down search is used until `no' is answered to at least one question).
Using this mode the declarative debugger will ask about the children of the last atom whose assertion was false. This makes the search more predictable from the user's point of view as the questions will more or less follow the program execution. The drawback of top-down search is that it may require a lot of questions to be answered before a bug is found, especially with deeply recursive program runs.
This search mode is used by default when no other mode is specified.
With this search mode the declarative debugger will attempt to halve the size of the search space with each question. In many cases this will result in the bug being found after O(log(N)) questions where N is the number of events between the event where the dd command was given and the corresponding CALL event. This makes the search feasible for deeply recursive runs where top-down search would require an unreasonably large number of questions to be answered. However, the questions may appear to come from unrelated parts of the program which can make them harder to answer.
The number of questions asked by the declarative debugger before it pinpoints the location of a bug can be reduced by giving it extra information. The kind of extra information that can be given and how to convey this information are explained in this section.
An incorrect subterm can be marked by the user from within the interactive term browser (see Declarative debugging commands). The effect of marking a subterm depends on whether the subterm was part of an input or an output argument.
If the subterm was an input then by marking the subterm the user is asserting that the call was inadmissible and that the marked input subterm is the reason for inadmissibility (i.e. the subterm's value violates a precondition of the call).
If the subterm was an output then the user is saying the exit atom is false in the intended interpretation and the marked subterm is the reason for the atom being false (i.e. if it were some other value then the atom would be true).
In either case the next question asked by the declarative debugger will be about the call that bound the incorrect subterm, unless that call was eliminated as a possible bug because of an answer to a previous question or the call that bound the subterm was not traced.
For example consider the following fragment of a program that calculates payments for a loan:
:- type payment
    --->    payment(
                date    :: date,
                amount  :: float
            ).

:- type date ---> date(int, int, int).  % date(day, month, year).

:- pred get_payment(loan::in, int::in, payment::out) is det.

get_payment(Loan, PaymentNo, Payment) :-
    get_payment_amount(Loan, PaymentNo, Amount),
    get_payment_date(Loan, PaymentNo, Date),
    Payment = payment(Date, Amount).
Suppose that get_payment produces an incorrect result and the declarative debugger asks:
get_payment(loan(...), 10, payment(date(9, 10, 1977), 10.000000000000)).
Valid?
Then if we know that this is the right payment amount for the given loan, but the date is incorrect, we can mark the date(...) subterm and the debugger will then ask us about get_payment_date:
get_payment(loan(...), 10, payment(date(9, 10, 1977), 10.000000000000)).
Valid? browse
browser> cd 3/1
browser> ls
date(9, 10, 1977)
browser> mark
get_payment_date(loan(...), 10, date(9, 10, 1977)).
Valid?
Thus irrelevant questions about get_payment_amount are avoided.
If, say, the date was only wrong in the year part, then we could also have marked the year subterm in which case the next question would have been about the call that constructed the year part of the date.
This feature is also useful when using the procedural debugger. For example, suppose that you come across a CALL event and you would like to know the source of a particular input to the call. To find out you could first go to the final event by issuing a finish command. Invoke the declarative debugger with a dd command and then mark the input term you are interested in. The next question should be about the call that bound the term. Issue a pd command at this point to return to the procedural debugger. It will now show the final event of the call that bound the term.
Note that this feature is only available if the executable is compiled in a .decldebug grade or with the --trace rep option. If a module is compiled with the --trace rep option but other modules in the program are not then you will not be able to track subterms through those other modules.
The declarative debugger can also be told to assume that certain predicates, functions or entire modules do not contain any bugs. The declarative debugger will never ask questions about trusted predicates or functions. It is a good idea to trust standard library modules imported by a program being debugged.
The declarative debugger can be told which predicates/functions it can trust before the dd command is given. This is done using the trust, trusted and untrust commands at the mdb prompt (see Declarative debugging mdb commands for details on how to use these commands).
Trust commands may be placed in the .mdbrc file which contains default settings for mdb (see Mercury debugger invocation). Trusted predicates will also be exported with a save command (see Miscellaneous commands).
During the declarative debugging session the user may tell the declarative debugger to trust the predicate or function in the current question. Alternatively the user may tell the declarative debugger to trust all the predicates and functions in the same module as the predicate or function in the current question. See the trust command in Declarative debugging commands.
The Mercury compiler allows compilation of predicates for execution using the Aditi2 deductive database system. There are several sources of useful information:
As an alternative to compiling stand-alone programs, you can execute queries using the Aditi query shell.
The Aditi interface library is installed as part of the Aditi installation process. To use the Aditi library in your programs, use the Mmakefile in $ADITI_HOME/demos/transactions as a template.
To obtain the best trade-off between productivity and efficiency, programmers should not spend too much time optimizing their code until they know which parts of the code are really taking up most of the time. Only once the code has been profiled should the programmer consider making optimizations that would improve efficiency at the expense of readability or ease of maintenance. A good profiler is therefore a tool that should be part of every software engineer's toolkit.
Mercury programs can be analyzed using two distinct profilers. The Mercury profiler mprof is a conventional call-graph profiler (or graph profiler for short) in the style of gprof. The Mercury deep profiler mdprof is a new kind of profiler that associates a lot more context with each measurement. mprof can be used to profile either time or space, but not both at the same time; mdprof can profile both time and space at the same time.
To enable profiling, your program must be built with profiling enabled. The two different profilers require different support, and thus you must choose which one to enable when you build your program.
If you are using Mmake, then you pass these options to all the relevant programs by setting the GRADEFLAGS variable in your Mmakefile, e.g. by adding the line GRADEFLAGS=--profiling. (For more information about the different grades, see Compilation model options.)
Enabling profiling has several effects. First, it causes the compiler to generate slightly modified code, which counts the number of times each predicate or function is called, and for every call, records the caller and callee. With deep profiling, there are other modifications as well, the most important impact of which is the loss of tail-recursion for groups of mutually tail-recursive predicates (self-tail-recursive predicates stay tail-recursive). Second, your program will be linked with versions of the library and runtime that were compiled with the same kind of profiling enabled. Third, if you enable graph profiling, the compiler will generate for each source file the static call graph for that file in module.prof.
Once you have created a profiled executable, you can gather profiling information by running the profiled executable on some test data that is representative of the intended uses of the program. The profiling version of your program will collect profiling information during execution, and save this information at the end of execution, provided execution terminates normally and not via an abort.
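For example, assuming a program whose top-level module is foo.m and whose Mmakefile contains the line GRADEFLAGS=--profiling, a typical sequence (with a placeholder input file name) might be:
mmake foo.depend
mmake foo
./foo < typical_input.data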
Executables compiled with --profiling save profiling data in the files Prof.Counts, Prof.Decls, and Prof.CallPair. (Prof.Decls contains the names of the procedures and their associated addresses, Prof.CallPair records the number of times each procedure was called by each different caller, and Prof.Counts records the number of times that execution was in each procedure when a profiling interrupt occurred.) Executables compiled with --memory-profiling will use two of those files (Prof.Decls and Prof.CallPair) and two others: Prof.MemoryWords and Prof.MemoryCells. Executables compiled with --deep-profiling save profiling data in a single file, Deep.data.
It is also possible to combine mprof profiling results from multiple runs of your program. You can do so by running your program several times, and typing mprof_merge_counts after each run. It is not (yet) possible to combine mdprof profiling results from multiple runs of your program.
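For example (the program and data file names are placeholders):
./foo < test1.data
mprof_merge_counts
./foo < test2.data
mprof_merge_counts
mprof -c > mprof.out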
Due to a known timing-related bug in our code, you may occasionally get segmentation violations when running your program with mprof profiling enabled. If this happens, just run it again — the problem occurs only very rarely. The same vulnerability does not occur with mdprof profiling.
With both profilers, you can control whether time profiling measures real (elapsed) time, user time plus system time, or user time only, by including the options -Tr, -Tp, or -Tv respectively in the environment variable MERCURY_OPTIONS when you run the program to be profiled. Currently, the -Tp and -Tv options don't work on Windows, so on Windows you must explicitly specify -Tr.
The default is user time plus system time, which counts all time spent executing the process, including time spent by the operating system working on behalf of the process, but not including time that the process was suspended (e.g. due to time slicing, or while waiting for input). When measuring real time, profiling counts even periods during which the process was suspended. When measuring user time only, profiling does not count time inside the operating system at all.
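For example, with a Bourne-style shell, the following (the program name is a placeholder) measures elapsed real time:
MERCURY_OPTIONS=-Tr ./foo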
To display the graph profile information gathered from one or more profiling runs, just type mprof or mprof -c. (For programs built with --high-level-code, you also need to pass the --no-demangle option to mprof.) Note that mprof can take quite a while to execute (especially with -c), and will usually produce quite a lot of output, so you will usually want to redirect the output into a file with a command such as mprof > mprof.out.
The output of mprof -c consists of three major sections. These are named the call graph profile, the flat profile and the alphabetic listing. The output of mprof contains the flat profile and the alphabetic listing only.
The call graph profile presents the local call graph of each procedure. For each procedure it shows the parents (callers) and children (callees) of that procedure, and shows the execution time and call counts for each parent and child. It is sorted on the total amount of time spent in the procedure and all of its descendents (i.e. all of the procedures that it calls, directly or indirectly).
The flat profile presents just the execution time spent in each procedure. It does not count the time spent in descendents of a procedure.
The alphabetic listing just lists the procedures in alphabetical order, along with their index number in the call graph profile, so that you can quickly find the entry for a particular procedure in the call graph profile.
The profiler works by interrupting the program at frequent intervals, and each time recording the currently active procedure and its caller. It uses these counts to determine the proportion of the total time spent in each procedure. This means that the figures calculated for these times are only a statistical approximation to the real values, and so they should be treated with some caution. In particular, if the profiler's assumption that calls to a procedure from different callers have roughly similar costs is not true, the graph profile can be quite misleading.
The time spent in a procedure and its descendents is calculated by propagating the times up the call graph, assuming that each call to a procedure from a particular caller takes the same amount of time. This assumption is usually reasonable, but again the results should be treated with caution. (The deep profiler does not make such an assumption, and hence its output is significantly more reliable.)
Note that any time spent in a C function (e.g. time spent in GC_malloc(), which does memory allocation and garbage collection) is credited to the Mercury procedure that called that C function.
Here is a small portion of the call graph profile from an example program.
                                  called/total       parents
index  %time    self descendents  called+self    name           index
                                  called/total       children

                                                     <spontaneous>
[1]    100.0    0.00        0.75       0         call_engine_label [1]
                0.00        0.75       1/1           do_interpreter [3]
-----------------------------------------------
                0.00        0.75       1/1           do_interpreter [3]
[2]    100.0    0.00        0.75       1         io.run/0(0) [2]
                0.00        0.00       1/1           io.init_state/2(0) [11]
                0.00        0.74       1/1           main/2(0) [4]
-----------------------------------------------
                0.00        0.75       1/1           call_engine_label [1]
[3]    100.0    0.00        0.75       1         do_interpreter [3]
                0.00        0.75       1/1           io.run/0(0) [2]
-----------------------------------------------
                0.00        0.74       1/1           io.run/0(0) [2]
[4]     99.9    0.00        0.74       1         main/2(0) [4]
                0.00        0.74       1/1           sort/2(0) [5]
                0.00        0.00       1/1           print_list/3(0) [16]
                0.00        0.00       1/10          io.write_string/3(0) [18]
-----------------------------------------------
                0.00        0.74       1/1           main/2(0) [4]
[5]     99.9    0.00        0.74       1         sort/2(0) [5]
                0.05        0.65       1/1           list.perm/2(0) [6]
                0.00        0.09   40320/40320       sorted/1(0) [10]
-----------------------------------------------
                                       8             list.perm/2(0) [6]
                0.05        0.65       1/1           sort/2(0) [5]
[6]     86.6    0.05        0.65       1+8       list.perm/2(0) [6]
                0.00        0.60    5914/5914        list.insert/3(2) [7]
                                       8             list.perm/2(0) [6]
-----------------------------------------------
                0.00        0.60    5914/5914        list.perm/2(0) [6]
[7]     80.0    0.00        0.60    5914         list.insert/3(2) [7]
                0.60        0.60    5914/5914        list.delete/3(3) [8]
-----------------------------------------------
                                   40319             list.delete/3(3) [8]
                0.60        0.60    5914/5914        list.insert/3(2) [7]
[8]     80.0    0.60        0.60    5914+40319   list.delete/3(3) [8]
                                   40319             list.delete/3(3) [8]
-----------------------------------------------
                0.00        0.00       3/69283       tree234.set/4(0) [15]
                0.09        0.09   69280/69283       sorted/1(0) [10]
[9]     13.3    0.10        0.10   69283         compare/3(0) [9]
                0.00        0.00       3/3           __Compare___io__stream/0(0) [20]
                0.00        0.00   69280/69280       builtin_compare_int/3(0) [27]
-----------------------------------------------
                0.00        0.09   40320/40320       sort/2(0) [5]
[10]    13.3    0.00        0.09   40320         sorted/1(0) [10]
                0.09        0.09   69280/69283       compare/3(0) [9]
-----------------------------------------------
The first entry is call_engine_label and its parent is <spontaneous>, meaning that it is the root of the call graph. (The first three entries, call_engine_label, do_interpreter, and io.run/0 are all part of the Mercury runtime; main/2 is the entry point to the user's program.)
Each entry of the call graph profile consists of three sections, the parent procedures, the current procedure and the children procedures.
Reading across from the left, for the current procedure the fields are: its index number, the percentage of total running time accounted for by the procedure and its descendents, the time spent in the procedure itself (self), the time spent in its descendents, the number of calls to the procedure (including self-recursive calls), and the procedure's name.
The predicate or function names are not just followed by their arity but also by their mode in brackets. A mode of zero corresponds to the first mode declaration of that predicate in the source code. For example, list.delete/3(3) corresponds to the (out, out, in) mode of list.delete/3.
Now for the parent and child procedures the self and descendent time have slightly different meanings. For the parent procedures the self and descendent time represent the proportion of the current procedure's self and descendent time due to that parent. These times are obtained using the assumption that each call contributes equally to the total time of the current procedure.
To create a memory profile, you can invoke mprof with the -m (--profile memory-words) option. This will profile the amount of memory allocated, measured in units of words. (A word is 4 bytes on a 32-bit architecture, and 8 bytes on a 64-bit architecture.)
Alternatively, you can use mprof's -M (--profile memory-cells) option. This will profile memory in units of “cells”. A cell is a group of words allocated together in a single allocation, to hold a single object. Selecting this option will therefore profile the number of memory allocations, while ignoring the size of each memory allocation.
With memory profiling, just as with time profiling, you can use the -c (--call-graph) option to display call graph profiles in addition to flat profiles.
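For example, the following commands (the output file names are arbitrary) produce a call-graph profile of words allocated and a flat profile of cells allocated:
mprof -m -c > mprof.words.out
mprof -M > mprof.cells.out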
Note that Mercury's memory profiler will only tell you about allocation, not about deallocation (garbage collection). It can tell you how much memory was allocated by each procedure, but it won't tell you how long the memory was live for, or how much of that memory was garbage-collected. This is also true for mdprof.
To display the information contained in a deep profiling data file (which will be called Deep.data unless you renamed it), start up your browser and give it a URL of the form http://server.domain.name/cgi-bin/mdprof_cgi?/full/path/name/Deep.data. The server.domain.name part should be the name of a machine with the following qualifications: it should have a web server running on it, and it should have the mdprof_cgi program installed in its /usr/lib/cgi-bin directory. The /full/path/name/Deep.data part should be the full path name of the deep profiling data file whose data you wish to explore. The name of this file must not have percent signs in it.
On some operating systems, Mercury's profiling doesn't work properly with shared libraries. The symptom is errors (map.lookup failed) or warnings from mprof. On some systems, the problem occurs because the C implementation fails to conform to the semantics specified by the ISO C standard for programs that use shared libraries. For other systems, we have not been able to analyze the cause of the failure (but we suspect that the cause may be the same as on those systems where we have been able to analyze it).
If you get errors or warnings from mprof, and your program is dynamically linked, try rebuilding your application statically linked, e.g. by using MLFLAGS=--static in your Mmakefile. Another work-around that sometimes works is to set the environment variable LD_BIND_NOW to a non-null value before running the program.
This section contains a brief description of all the options available for mmc, the Mercury compiler. Sometimes this list is a little out-of-date; use mmc --help to get the most up-to-date list.
mmc
is invoked as
mmc [options] arguments
Arguments can be either module names or file names. Arguments ending in .m are assumed to be file names, while other arguments are assumed to be module names, with . (rather than __ or :) as module qualifier. If you specify a module name such as foo.bar.baz, the compiler will look for the source in files foo.bar.baz.m, bar.baz.m, and baz.m, in that order.
Options are either short (single-letter) options preceded by a single -, or long options preceded by --. Options are case-sensitive. We call options that do not take arguments flags. Single-letter flags may be grouped with a single -, e.g. -vVc. Single-letter flags may be negated by appending another trailing -, e.g. -v-. Long flags may be negated by preceding them with no-, e.g. --no-verbose.
-w
--inhibit-warnings
--halt-at-warn
--halt-at-syntax-error
--inhibit-accumulator-warnings
--no-warn-singleton-variables
--no-warn-missing-det-decls
--no-warn-det-decls-too-lax
--no-warn-inferred-erroneous
--no-warn-nothing-exported
--warn-unused-args
--warn-interface-imports
--warn-missing-opt-files
--warn-missing-trans-opt-files
--warn-non-stratification
--no-warn-simple-code
--warn-duplicate-calls
--no-warn-missing-module-name
--no-warn-wrong-module-name
--no-warn-smart-recompilation
--no-warn-undefined-options-variables
--warn-non-tail-recursion
--no-warn-target-code
--no-warn-up-to-date
--no-warn-stubs
--warn-dead-procs
--no-warn-table-with-inline
--no-warn-non-term-special-preds
-v
--verbose
-V
--very-verbose
-E
--verbose-error-messages
--no-verbose-make
--output-compile-error-lines n
--verbose-commands
--verbose-recompilation
--find-all-recompilation-reasons
-S
--statistics
-T
--debug-types
-N
--debug-modes
--debug-det
--debug-determinism
--debug-opt
--debug-opt-pred-id predid
--debug-pd
--debug-rl-gen
--debug-rl-opt
--debug-liveness <n>
--debug-make
These options are mutually exclusive. If more than one of these options is specified, only the first in this list will apply. If none of these options are specified, the default action is to compile and link the modules named on the command line to produce an executable.
-f
--generate-source-file-mapping
-M
--generate-dependencies
--generate-module-order
--generate-mmc-deps
--generate-mmc-make-module-dependencies
-i
--make-int
--make-interface
--make-short-int
--make-short-interface
--make-priv-int
--make-private-interface
--make-opt-int
--make-optimization-interface
--make-trans-opt
--make-transitive-optimization-interface
-P
--pretty-print
--convert-to-mercury
--typecheck-only
-e
--errorcheck-only
-C
--target-code-only
-c
--compile-only
--aditi-only
--output-grade-string
--output-link-command
--output-shared-lib-link-command
--smart-recompilation
--no-assume-gmake
--trace-level level
--trace-optimized
--no-delay-death
--stack-trace-higher-order
--generate-bytecode
--auto-comments
-n-
--no-line-numbers
--show-dependency-graph
-d stage
--dump-hlds stage
--dump-hlds-options options
--dump-hlds-pred-id predid
--dump-mlds stage
--verbose-dump-mlds stage
--mode-constraints
--simple-mode-constraints
--benchmark-modes
--benchmark-modes-repeat num
--dump-rl
--dump-rl-bytecode
--generate-schemas
See the Mercury language reference manual for detailed explanations of these options.
--no-reorder-conj
--no-reorder-disj
--fully-strict (don't optimize away infinite loops or calls to error/1)
--allow-stubs
--infer-all
--infer-types
--infer-modes
--no-infer-det
--no-infer-determinism
--type-inference-iteration-limit n
--mode-inference-iteration-limit n
For detailed explanations, see the “Termination analysis” section of the “Implementation-dependent extensions” chapter in the Mercury Language Reference Manual.
--enable-term
--enable-termination
--chk-term
--check-term
--check-termination
--verb-chk-term
--verb-check-term
--verbose-check-termination
--term-single-arg limit
--termination-single-argument-analysis limit
--termination-norm norm
--term-err-limit limit
--termination-error-limit limit
--term-path-limit limit
--termination-path-limit limit
The following compilation options affect the generated code in such a way that the entire program must be compiled with the same setting of these options, and it must be linked to a version of the Mercury library which has been compiled with the same setting. (Attempting to link object files compiled with different settings of these options will generally result in an error at link time, typically of the form undefined symbol MR_grade_... or symbol MR_runtime_grade multiply defined.)
The options below must be passed to mgnuc, c2init and ml as well as to mmc. If you are using Mmake, then you should specify these options in the GRADEFLAGS variable rather than specifying them in MCFLAGS, MGNUCFLAGS and MLFLAGS.
-s grade
--grade grade
The default grade is system-dependent; it is chosen at installation time by configure, the auto-configuration script, but can be overridden with the environment variable MERCURY_DEFAULT_GRADE if desired. Depending on your particular installation, only a subset of these possible grades will have been installed. Attempting to use a grade which has not been installed will result in an error at link time. (The error message will typically be something like ld: can't find library for -lmercury.)
The tables below show the options that are selected by each base grade and grade modifier; they are followed by descriptions of those options.
none: --target c --no-gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
reg: --target c --gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
jump: --target c --no-gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
fast: --target c --gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
asm_jump: --target c --no-gcc-global-registers --gcc-nonlocal-gotos --asm-labels
asm_fast: --target c --gcc-global-registers --gcc-nonlocal-gotos --asm-labels
hlc: --target c --high-level-code
hl: --target c --high-level-code --high-level-data
il: --target il --high-level-code --high-level-data
java: --target java --high-level-code --high-level-data
.gc: --gc boehm
.mps: --gc mps
.agc: --gc accurate
.prof: --profiling
.memprof: --memory-profiling
.profdeep: --deep-profiling
.tr: --use-trail
.rt: --reserve-tag
.debug: --debug
.decldebug: --decl-debug
--target c (grades: none, reg, jump, fast, asm_jump, asm_fast, hl, hlc)
--target asm (grades: hlc)
--il, --target il (grades: il)
--java, --target java (grades: java)
--il-only
--dotnet-library-version version-number
--no-support-ms-clr
--support-rotor-clr
--compile-to-c
--compile-to-C
--java-only
--gcc-global-registers (grades: reg, fast, asm_fast)
--no-gcc-global-registers (grades: none, jump, asm_jump)
--gcc-non-local-gotos (grades: jump, fast, asm_jump, asm_fast)
--no-gcc-non-local-gotos (grades: none, reg)
--asm-labels (grades: asm_jump, asm_fast)
--no-asm-labels (grades: none, reg, jump, fast)
--pic-reg (grades: any grade containing `.pic_reg')
-H, --high-level-code (grades: hl, hlc, il, java)
--high-level-data (grades: hl, il, java)
--debug (grades: any grade containing .debug)
--decl-debug (grades: any grade containing .decldebug)
--profiling, --time-profiling (grades: any grade containing .prof)
--memory-profiling (grades: any grade containing .memprof)
--deep-profiling (grades: any grade containing .profdeep)
--gc {none, boehm, mps, accurate, automatic}
--garbage-collection {none, boehm, mps, accurate, automatic}
--use-trail (grades: any grade containing .tr)
Of the options listed below, the --num-tag-bits option may be useful for cross-compilation, but apart from that these options are all experimental and are intended for use by developers of the Mercury implementation rather than by ordinary Mercury programmers.
--tags {none, low, high}
--num-tag-bits n
--num-reserved-addresses n
--num-reserved-objects n
Note that reserved objects will only be used if reserved addresses (see --num-reserved-addresses) are not available, since the latter are more efficient.
--reserve-tag (grades: any grade containing .rt)
--no-type-layout
--low-level-debug
--pic
--no-trad-passes
--no-reclaim-heap-on-nondet-failure
--no-reclaim-heap-on-semidet-failure
--no-reclaim-heap-on-failure
--fact-table-max-array-size size
--fact-table-hash-percent-full percentage
The following options allow the Mercury compiler to optimize the generated C code based on the characteristics of the expected target architecture. The default values of these options will be whatever is appropriate for the host architecture that the Mercury compiler was installed on, so normally there is no need to set these options manually. They might come in handy if you are cross-compiling. But even when cross-compiling, it's probably not worth bothering to set these unless efficiency is absolutely paramount.
--have-delay-slot
--num-real-r-regs n
--num-real-f-regs n
--num-real-r-temps n
--num-real-f-temps n
-O n
--opt-level n
--optimization-level n
In general, there is a trade-off between compilation speed and the speed of the generated code. When developing, you should normally use optimization level 0, which aims to minimize compilation time. It enables only those optimizations that in fact usually reduce compilation time. The default optimization level is level 2, which delivers reasonably good optimization in reasonable time. Optimization levels higher than that give better optimization, but take longer, and are subject to the law of diminishing returns. The difference in the quality of the generated code between optimization level 5 and optimization level 6 is very small, but using level 6 may increase compilation time and memory requirements dramatically.
Note that if you want the compiler to perform cross-module optimizations, then you must enable them separately; the cross-module optimizations are not enabled by any -O level, because they affect the compilation process in ways that require special treatment by mmake.
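For example, with Mmake one might add a line such as the following to the Mmakefile (a sketch only; choose the -O level to suit):
MCFLAGS = -O3 --intermodule-optimization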
--opt-space
--optimize-space
--intermodule-optimization
--trans-intermod-opt
--transitive-intermodule-optimization
--no-read-opt-files-transitively
--use-opt-files
--use-trans-opt-files
--intermodule-analysis
--split-c-files
The --high-level-code back-end does not support --split-c-files.
N.B. When using mmake, the --split-c-files option should not be placed in the MCFLAGS variable. Instead, use the MODULE.split target, i.e. type mmake foo.split rather than mmake foo.
These optimizations are high-level transformations on our HLDS (high-level data structure).
--no-inlining
--no-inline-simple
--no-inline-builtins
--no-inline-single-use
--inline-compound-threshold threshold
--inline-simple-threshold threshold
--intermod-inline-simple-threshold threshold
--inline-vars-threshold threshold
--loop-invariants
--no-common-struct
--no-common-goal
--constraint-propagation
--local-constraint-propagation
--no-follow-code
--optimize-unused-args
--intermod-unused-args
--unneeded-code
--unneeded-code-copy-limit
--optimize-higher-order
--type-specialization
--user-guided-type-specialization
--higher-order-size-limit
--higher-order-arg-limit
--optimize-constant-propagation
--introduce-accumulators
--optimize-constructor-last-call
--optimize-dead-procs
--excess-assign
--optimize-duplicate-calls
--delay-constructs
--optimize-saved-vars
--deforestation
--deforestation-depth-limit
--deforestation-vars-threshold
--deforestation-size-threshold
--analyse-exceptions
These optimizations are applied to the medium level intermediate code.
--no-mlds-optimize
--no-optimize-tailcalls
--no-optimize-initializations
--eliminate-local-variables
These optimizations are applied during the process of generating low-level intermediate code from our high-level data structure.
--no-static-ground-terms
--no-smart-indexing
--dense-switch-req-density percentage
--dense-switch-size size
--lookup-switch-req-density percentage
--lookup-switch-size size
--string-switch-size size
--tag-switch-size size
--try-switch-size size
--binary-switch-size size
--no-middle-rec
--no-simple-neg
These optimizations are transformations that are applied to our low-level intermediate code before emitting C code.
--no-common-data
--no-llds-optimize
--no-optimize-peep
--no-optimize-jumps
--no-optimize-fulljumps
--pessimize-tailcalls
--checked-nondet-tailcalls
--no-use-local-vars
--no-optimize-labels
--optimize-dups
--no-optimize-frames
--no-optimize-delay-slot
--optimize-reassign
--optimize-repeat n
These optimizations are applied during the process of generating C intermediate code from our low-level data structure.
--no-emit-c-loops
--use-macro-for-redo-fail
--procs-per-c-function n
These optimizations are applied to the Aditi-RL code produced for predicates with :- pragma aditi(...) declarations (see Using Aditi).
--optimize-rl
--optimize-rl-cse
--optimize-rl-invariants
--optimize-rl-index
--detect-rl-streams
-m
--make
-r
--rebuild
--pre-link-command command
--extra-init-command command
-k
--keep-going
--install-prefix dir
--install-command command
--libgrade grade
--flags file
--flags-file file
--options-file file
--config-file file
--options-search-directory dir
--mercury-configuration-directory dir
--mercury-config-dir dir
-I dir
--search-directory dir
--intermod-directory dir
--use-search-directories-for-intermod
--use-subdirs
--use-grade-subdirs
-?
-h
--help
--filenames-from-stdin
--aditi-user
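For example, to build a program in `mmc --make' mode, continuing past errors and keeping intermediate files in grade-specific subdirectories, one might type something like the following (here `main' is a hypothetical top-level module name):
mmc --make --keep-going --use-grade-subdirs main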
If you are using Mmake, you need to pass these options to the target code compiler (e.g. mgnuc) rather than to mmc.
--target-debug
--cc compiler-name
--c-include-directory dir
--c-debug
--no-c-optimize
--no-ansi-c
--inline-alloc
--cflags options
--cflag option
--javac compiler-name
--java-compiler compiler-name
--java-interpreter interpreter-name
--java-flags options
--java-flag option
--java-classpath dir
--java-object-file-extension extension
-o filename
--output-file filename
--ld-flags options
--ld-flag option
Use mmc --output-link-command to find out which command is used. --ld-flag should be used for single words which need to be quoted when passed to the shell.
--ld-libflags options
--ld-libflag option
Use mmc --output-shared-lib-link-command to find out which command is used. --ld-libflag should be used for single words which need to be quoted when passed to the shell.
-L directory
--library-directory directory
-R directory
--runtime-library-directory directory
--shlib-linker-install-name-path directory
-l library
--library library
--link-object object
--mld directory
--mercury-library-directory directory
--ml library
--mercury-library library
--mercury-standard-library-directory directory
--mercury-stdlib-dir directory
--no-mercury-standard-library-directory
--no-mercury-stdlib-dir
--init-file-directory directory
--init-file file
--trace-init-file file
--linkage {shared,static}
--mercury-linkage {shared,static}
--no-strip
--no-demangle
--no-main
--allow-undefined
--no-use-readline
--runtime-flags flags
--extra-initialization-functions
--extra-inits
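For example, a sketch of linking against a separately installed Mercury library (the directory and library names here are hypothetical) is:
mmc --make --mld /usr/local/mercury-libs --ml mylib main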
The shell scripts in the Mercury compilation environment will use the following environment variables if they are set. There should be little need to use these, because the default values will generally work fine.
MERCURY_DEFAULT_GRADE
MERCURY_STDLIB_DIR
MERCURY_NONSHARED_LIB_DIR
MERCURY_OPTIONS
-C size
-D debugger
-p
-P num
-T time-method
Currently, the -Tp and -Tv options don't work on Windows, so on Windows you must explicitly specify -Tr.
--heap-size size
--detstack-size size
--nondetstack-size size
--solutions-heap-size size
--trail-size size
-i filename
--mdb-in filename
-o filename
--mdb-out filename
-e filename
--mdb-err filename
-m filename
--mdb-tty filename
--debug-threads
--tabling-statistics
--mem-usage-report
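For example, to enlarge the det stack for a single run of a program, one might set (the size shown is illustrative and the program name hypothetical):
MERCURY_OPTIONS='--detstack-size 8192' ./my_program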
MERCURY_COMPILER
MERCURY_MKINIT
MERCURY_DEBUGGER_INIT
The Mercury compiler takes special advantage of certain extensions provided by GNU C to generate much more efficient code. We therefore recommend that you use GNU C for compiling Mercury programs. However, if for some reason you wish to use another compiler, it is possible to do so. Here's what you need to do.
The Mercury foreign language interface allows a pragma foreign_proc to specify multiple implementations (in different foreign programming languages) for a procedure.
If the compiler generates code for a procedure using a back-end for which there are multiple applicable foreign languages, it will choose the foreign language to use for each procedure according to a built-in ordering.
If the language specified in a foreign_proc is not available for a particular back-end, it will be ignored.
If there are no suitable foreign_proc clauses for a particular procedure but there are Mercury clauses, they will be used instead.
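As a minimal sketch (the predicate and the foreign code bodies are hypothetical, not taken from this guide), a procedure might be given C and Java implementations plus a Mercury clause that serves as the fallback on back-ends where neither foreign_proc applies:

:- pred my_abs(float::in, float::out) is det.

:- pragma foreign_proc("C",
    my_abs(X::in, Y::out),
    [will_not_call_mercury, promise_pure],
"
    Y = (X < 0.0 ? -X : X);
").

:- pragma foreign_proc("Java",
    my_abs(X::in, Y::out),
    [will_not_call_mercury, promise_pure],
"
    Y = (X < 0.0 ? -X : X);
").

% Fallback Mercury clause, used when no foreign_proc above is applicable
% (assumes the float module is imported).
my_abs(X, Y) :-
    ( X < 0.0 -> Y = 0.0 - X ; Y = X ).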
[1] We might eventually add support for ordinary “Make” programs, but currently only GNU Make is supported.