Macro

A macro in computer science is an abstraction that defines how a certain input pattern is replaced by an output pattern according to a defined set of rules.

The term originated with macro assemblers, where the idea was to write a single statement that looked like an ordinary instruction in the assembly language (a macro-instruction). When the program was assembled, the macro-instruction was expanded into a sequence of real instructions, which were then assembled in its place.

In this way the complexity of the underlying instruction sequence was hidden, and the abstraction provided by the macro made code easier to write and to understand.

More sophisticated macro assemblers offered ways to add parameters to macros, so that a macro would expand differently according to the values of its parameters.