BlooP and FlooP

BlooP and FlooP (Bounded loop and Free loop) are simple programming languages designed by Douglas Hofstadter to illustrate a point in his book Gödel, Escher, Bach.[1] BlooP is a Turing-incomplete programming language whose main control flow structure is a bounded loop (i.e. recursion is not permitted[citation needed]). All BlooP programs must terminate, and the language can express only primitive recursive functions.[2]

FlooP is identical to BlooP except that it supports unbounded loops; it is a Turing-complete language and can express all computable functions. For example, it can express the Ackermann function, which (not being primitive recursive) cannot be written in BlooP. Borrowing from standard terminology in mathematical logic,[3][4] Hofstadter calls FlooP's unbounded loops MU-loops. Like all Turing-complete programming languages, FlooP suffers from the halting problem: programs might not terminate, and it is not possible, in general, to decide which programs do.
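
For reference, the Ackermann function has a short recursive definition; the Python sketch below is only an illustration of a computable function that is not primitive recursive, not a FlooP program.

def ackermann(m: int, n: int) -> int:
    # Standard two-argument Ackermann function; it grows too fast to be
    # primitive recursive, so it has no BlooP implementation.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m - 1, n - 1))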

BlooP and FlooP can be regarded as models of computation, and have sometimes been used in teaching computability.[5]

BlooP examples


The only variables are OUTPUT (the return value of the procedure) and CELL(i) (an unbounded sequence of natural-number variables, indexed by constants, as in the Unlimited Register Machine[6]). The only operators are ⇐ (assignment), + (addition), × (multiplication), < (less-than), > (greater-than) and = (equals).

Each program uses only a finite number of cells, but the numbers in the cells can be arbitrarily large. Data structures such as lists or stacks can be handled by interpreting the number in a cell in specific ways, that is, by Gödel numbering the possible structures.
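
One possible encoding (an illustrative sketch, not the particular numbering Hofstadter specifies) represents a stack of natural numbers by a single number: pushing n onto the stack coded by s yields 2^n·(2s + 1), with 0 coding the empty stack.

def push(n, s):
    # Encode pushing n onto the stack coded by s; the result is always positive.
    return 2 ** n * (2 * s + 1)

def top(s):
    # Exponent of 2 in s, i.e. the most recently pushed value (s must be non-zero).
    n = 0
    while s % 2 == 0:
        s //= 2
        n += 1
    return n

def pop(s):
    # Strip the top element, recovering the code of the rest of the stack.
    while s % 2 == 0:
        s //= 2
    return (s - 1) // 2

With this coding, top(push(n, s)) = n and pop(push(n, s)) = s, which is all that the FlooP example below requires of PUSH, TOP, and POP.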

Control flow constructs include bounded loops, conditional statements, ABORT jumps out of loops, and QUIT jumps out of blocks. BlooP does not permit recursion, unrestricted jumps, or anything else that would have the same effect as the unbounded loops of FlooP. Named procedures can be defined, but these can call only previously defined procedures.[7]

Factorial function

DEFINE PROCEDURE FACTORIAL [N]:
BLOCK 0: BEGIN
        OUTPUT ⇐ 1;
        CELL(0) ⇐ 1;
        LOOP AT MOST N TIMES:
        BLOCK 1: BEGIN
                OUTPUT ⇐ OUTPUT × CELL(0);
                CELL(0) ⇐ CELL(0) + 1;
        BLOCK 1: END;
BLOCK 0: END.
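
For comparison, a direct transcription into Python (an illustration only, not part of Hofstadter's text) makes the bounded loop explicit: the number of iterations is fixed to N before the loop begins.

def factorial(n):
    output = 1
    cell0 = 1
    for _ in range(n):            # LOOP AT MOST N TIMES
        output = output * cell0
        cell0 = cell0 + 1
    return output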

Subtraction function


Subtraction is not a built-in operation. Since it is defined on the natural numbers, it never gives a negative result (e.g. 2 − 3 := 0). Note that OUTPUT starts at 0, like all the CELLs, and therefore requires no initialization.

DEFINE PROCEDURE MINUS [M,N]:
BLOCK 0: BEGIN
        IF M < N, THEN:
        QUIT BLOCK 0;
        LOOP AT MOST M + 1 TIMES:
        BLOCK 1: BEGIN
                IF OUTPUT + N = M, THEN:
                ABORT LOOP 1;
                OUTPUT ⇐ OUTPUT + 1;
        BLOCK 1: END;
BLOCK 0: END.
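
The same procedure transcribed into Python (again only for illustration):

def minus(m, n):
    output = 0                    # OUTPUT defaults to 0
    if m < n:                     # QUIT BLOCK 0: answer stays 0
        return output
    for _ in range(m + 1):        # LOOP AT MOST M + 1 TIMES
        if output + n == m:       # ABORT LOOP 1
            break
        output = output + 1
    return output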

FlooP example


The example below, which implements the Ackermann function, relies on simulating a stack using Gödel numbering: that is, on previously defined numerical functions PUSH, POP, and TOP satisfying PUSH [N, S] > 0, TOP [PUSH [N, S]] = N, and POP [PUSH [N, S]] = S. Since an unbounded MU-LOOP is used, this is not a legal BlooP program. The QUIT BLOCK instructions in this case jump to the end of the block and repeat the loop, unlike the ABORT, which exits the loop.[3]

DEFINE PROCEDURE ACKERMANN [M, N]:
BLOCK 0: BEGIN
        CELL(0) ⇐ M;
        OUTPUT ⇐ N;
        CELL(1) ⇐ 0;
        MU-LOOP:
        BLOCK 1: BEGIN
                IF CELL(0) = 0, THEN:
                BLOCK 2: BEGIN
                        OUTPUT ⇐ OUTPUT + 1;
                        IF CELL(1) = 0, THEN: ABORT LOOP 1;
                        CELL(0) ⇐ TOP [CELL(1)];
                        CELL(1) ⇐ POP [CELL(1)];
                        QUIT BLOCK 1;
                BLOCK 2: END
                IF OUTPUT = 0, THEN:
                BLOCK 3: BEGIN
                        OUTPUT ⇐ 1;
                        CELL(0) ⇐ MINUS [CELL(0), 1];
                        QUIT BLOCK 1;
                BLOCK 3: END
                OUTPUT ⇐ MINUS [OUTPUT, 1];
                CELL(1) ⇐ PUSH [MINUS [CELL(0), 1], CELL(1)];
        BLOCK 1: END;
BLOCK 0: END.
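
For illustration, the same control flow can be rendered as a Python while-loop, with an ordinary list standing in for the Gödel-numbered stack held in CELL(1); this is a sketch of the algorithm above, not FlooP syntax.

def ackermann(m, n):
    stack = []                     # plays the role of CELL(1)
    while True:                    # MU-LOOP
        if m == 0:
            n = n + 1
            if not stack:          # ABORT LOOP 1: nothing pending, done
                return n
            m = stack.pop()        # TOP / POP
        elif n == 0:
            n = 1
            m = m - 1
        else:
            n = n - 1
            stack.append(m - 1)    # PUSH [MINUS [CELL(0), 1], CELL(1)]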


References

  1. ^ Douglas Hofstadter (1979), Gödel, Escher, Bach, Basic Books, Chapter XIII.
  2. ^ Stanford Encyclopedia of Philosophy: Computability and Complexity
  3. ^ a b Hofstadter (1979), p. 424.
  4. ^ Thomas Forster (2003), Logic, Induction and Sets, Cambridge University Press, p. 130.
  5. ^ David Mix Barrington (2004), CMPSCI 601: Theory of Computation, University of Massachusetts Amherst, Lecture 27.
  6. ^ Hofstadter refers to these cells as a set of "auxiliary variables."
  7. ^ Hofstadter (1979), p. 413.