SSA-based Compiler Design
Part I: Vanilla SSA
Chapter 1: Introduction
1.1 Definition of SSA
- Static Single Assignment form (SSA) is a naming convention for storage locations (variables) in low-level representations of computer programs. A program is defined to be in SSA form if each variable is the target of exactly one assignment statement in the program text.
- The term static indicates that SSA relates to properties and analysis of program text (code)
- The term single refers to the uniqueness property of variable names that SSA imposes
- The term assignment means variable definitions
- One important property that holds for all varieties of SSA is referential transparency: i.e., since there is only a single definition for each variable in the program text, a variable's value is independent of its position in the program.
1.2 Informal Semantics of SSA
- The Φ-function, also known as a "pseudo-assignment function", is a special statement whose purpose is to merge values from different incoming paths at control-flow merge points. If there are multiple Φ-functions at the head of a basic block, then these are executed in parallel, i.e., simultaneously, not sequentially.
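A minimal sketch of this parallel-evaluation rule in plain Python (hypothetical variable names), using tuple assignment as a stand-in for a Φ-block whose two Φ-functions swap `a` and `b` each iteration:

```python
# Loop-header Phi-block being simulated:
#     a2 = Phi(a1, b2)    # first iteration: a1; later iterations: b2
#     b2 = Phi(b1, a2)    # first iteration: b1; later iterations: a2
a1, b1 = 1, 2

a2, b2 = a1, b1            # values flowing in on the entry edge
for _ in range(3):
    # Parallel semantics: both Phi arguments are read before either
    # target is written, exactly like Python's tuple assignment.
    a2, b2 = b2, a2

# Sequential evaluation of the same Phi-block would be wrong:
x, y = 1, 2
x = y                      # x == 2
y = x                      # y == 2: the swap is lost
print((a2, b2))            # (2, 1) after an odd number of iterations
print((x, y))              # (2, 2)
```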
1.3 Comparison with Classical Data-Flow Analysis
- Data-flow analysis collects information about programs at compile time in order to make optimizing code transformations
- Static analysis captures program execution information by propagating abstract information, such as data-flow facts, using an operational representation of the program such as a control-flow graph (CFG).
Chapter 2: Properties and Flavours
2.1 Def-Use and Use-Def Chains
- Def-use chains are data structures that provide, for the single definition of a variable, the set of all its uses.
- Use-def chains, which under SSA each consist of a single element (the unique definition of the used name), uniquely specify the definition that reaches each use.
- Copy folding removes any copy `a = b` by renaming subsequent (dominated) uses of `a` by `b`. It is usually coupled with the SSA renaming phase.
- A forward control-flow graph is an acyclic reduction of the CFG obtained by removing back edges.
2.2 Minimality
- A definition `D` of variable `v` reaches a point `p` in the CFG if there exists a path from `D` to `p` that does not pass through another definition of `v`.
- We say that a code has the single reaching-definition property iff no program point can be reached by two definitions of the same variable.
- Under the assumption that the single reaching-definition property is fulfilled, the minimality property requires that the number of inserted Φ-functions be minimal.
2.3 Strict SSA Form and Dominance Property
- A procedure is defined as strict if every variable is defined before it is used along every path from the entry to the exit point; otherwise, it is non-strict.
- In a CFG, a basic block `n1` dominates basic block `n2` ("n1 dom n2") if every path in the CFG from the entry point to `n2` includes `n1`. By convention, every basic block in a CFG dominates itself.
- Basic block `n1` strictly dominates `n2` ("n1 sdom n2") if `n1` dominates `n2` and `n1` != `n2`.
- The dominance property states that each use of a variable is dominated by its definition. Under SSA, because there is only a single (static) definition per variable, strictness is equivalent to the dominance property.
- Minimal SSA form is a variant of SSA form that satisfies both the minimality and dominance properties.
- The immediate dominator ("idom") of a node `N` is the unique node that strictly dominates `N` but does not strictly dominate any other node that strictly dominates `N`.
- A variable's live range is the set of program points where it is live. In the general case, a variable is considered to be live at a program point if there exists a definition-free path from that point to a use (upward-exposed use analysis). For strict programs, any program point from which a use can be reached without passing through a definition is necessarily reachable from a definition.
- A dominator tree is a tree where the children of each node are those nodes it immediately dominates. For each variable, its live range is a sub-tree of the dominator tree.
- A chordal graph is one in which all cycles of four or more vertices have a chord, which is an edge that is not part of the cycle but connects two vertices of the cycle. Equivalently, every induced cycle in the graph should have exactly three vertices.
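The dominance relations above can be computed with the classic iterative data-flow algorithm. The sketch below is a simplified illustration (a hypothetical diamond-shaped CFG; not the near-linear algorithms used in practice) that derives dominator sets and immediate dominators, from which the dominator tree follows:

```python
def dominators(succ, entry):
    """Map each node to the set of nodes that dominate it."""
    nodes = set(succ)
    pred = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            pred[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            # n is dominated by itself plus everything that dominates
            # all of its direct predecessors.
            new = {n} | set.intersection(*(dom[p] for p in pred[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def idoms(dom, entry):
    # The immediate dominator is the strict dominator "closest" to n:
    # strict dominators are totally ordered by dominance, so it is the
    # one with the largest dominator set of its own.
    return {n: max(dom[n] - {n}, key=lambda d: len(dom[d]))
            for n in dom if n != entry}

# Hypothetical diamond CFG: A -> B, A -> C, B -> D, C -> D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
dom = dominators(cfg, "A")
print(sorted(dom["D"]))     # ['A', 'D']: neither branch dominates the merge
print(idoms(dom, "A"))      # {'B': 'A', 'C': 'A', 'D': 'A'} = dominator tree
```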
2.4 Pruned SSA form
- Pruned SSA form is a variant of Minimal SSA form with the additional constraint that every "use point" for a given variable must be reached by exactly one definition, as opposed to all program points.
2.5 Conventional and Transformed SSA form
- A web is the maximal union of def-use chains that have either a use or a def in common.
- We say that `x` and `y` are Φ-related to one another if they are referenced by the same Φ-function, i.e., if `x` and `y` are either parameters to or defined by the Φ-function.
- The transitive closure of the Φ-relation defines an equivalence relation that partitions the variables defined locally in the procedure into equivalence classes, called Φ-webs.
- Conventional SSA form (C-SSA) is defined as SSA form for which each Φ-web is interference-free.
- A program is in Transformed SSA form (T-SSA) when some variables belonging to the same Φ-web interfere with one another; this is often the result of running optimizations on a program in Conventional SSA form.
2.6 A Stronger Definition of Interference
- Naively, two variables are said to interfere if their live ranges intersect. For SSA code with the dominance property, two live ranges intersect iff one contains the definition of the other.
Chapter 3: Standard Construction and Destruction Algorithms
3.1 Construction
- SSA construction refers to the process of translating a non-SSA program into one that satisfies the SSA constraints.
- Φ-function insertion performs "live range splitting" to ensure that any use of a given variable `v` is reached by exactly one definition of `v`.
- Variable renaming assigns a unique variable name to each live range.
- Join Sets and Dominance Frontiers
- For a given set of nodes `S` in a CFG, the join set `J(S)` is the set of join nodes of `S`, i.e., nodes in the CFG that can be reached by two (or more) distinct elements of `S` using disjoint paths.
- The dominance frontier of a node `n`, `DF(n)`, is the border of the CFG region that is dominated by `n`.
- The iterated dominance frontier (DF+(n)) is simply the transitive closure of the dominance frontier.
- A DJ-graph (dominance-join graph) is the dominator tree of the CFG (whose edges are the D-edges, or dominance edges) augmented with J-edges (join edges), which correspond to all edges of the CFG whose source does not strictly dominate its destination.
- By definition, there is a DF-edge `(a,b)` between any two CFG nodes `a` and `b` such that `a` dominates a direct predecessor of `b` but does not strictly dominate `b`. In other words, for each J-edge `(a,b)`, all ancestors of `a` (including `a`) that do not strictly dominate `b` have `b` in their dominance frontier.
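The J-edge characterization above leads directly to the standard dominance-frontier computation. The sketch below (hypothetical CFG; an illustrative rendering, not the book's pseudo-code) computes DF from immediate dominators and then the iterated frontier DF+ used for Φ-placement:

```python
def dominance_frontiers(succ, idom):
    df = {n: set() for n in succ}
    for b in succ:
        preds = [p for p in succ if b in succ[p]]
        if len(preds) >= 2:                  # the algorithm visits join nodes
            for p in preds:
                runner = p                   # walk ancestors of p ...
                while runner != idom[b]:     # ... up to (excluding) idom(b)
                    df[runner].add(b)
                    runner = idom[runner]
    return df

def iterated_df(df, defs):
    """DF+ of a set of definition blocks: where Phi-functions go."""
    result, work = set(), list(defs)
    while work:
        n = work.pop()
        for m in df[n]:
            if m not in result:
                result.add(m)
                work.append(m)               # an inserted Phi is a def too
    return result

cfg  = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
idom = {"B": "A", "C": "A", "D": "A"}
df = dominance_frontiers(cfg, idom)
print(df["B"], df["C"])            # {'D'} {'D'}
print(iterated_df(df, {"B"}))      # {'D'}: a def of v in B needs a Phi at D
```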
- Variable Renaming
- The variable renaming phase associates each individual live range with a new variable name, also called a version.
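A compressed sketch of the renaming phase (illustrative data layout; the filling-in of Φ-operands in CFG successors is elided): walk the dominator tree, keeping a stack of current versions per source variable, and pop on the way back up:

```python
from collections import defaultdict

counter = defaultdict(int)      # next version number per source variable
stacks  = defaultdict(list)     # stack of live versions per source variable

def new_version(v):
    counter[v] += 1
    name = f"{v}{counter[v]}"
    stacks[v].append(name)
    return name

def rename(block, dom_children, code):
    pushed = []
    for inst in code[block]:
        # Uses see the version on top of the stack (strict form is
        # assumed: every use is dominated by a definition).
        inst["uses"] = [stacks[u][-1] for u in inst["uses"]]
        if inst["def"]:
            v = inst["def"]
            inst["def"] = new_version(v)
            pushed.append(v)
    for child in dom_children.get(block, []):
        rename(child, dom_children, code)
    for v in pushed:            # leaving the subtree: pop this block's defs
        stacks[v].pop()

code = {"B0": [{"def": "x", "uses": []}, {"def": "x", "uses": ["x"]}]}
rename("B0", {}, code)
print(code["B0"])  # [{'def': 'x1', 'uses': []}, {'def': 'x2', 'uses': ['x1']}]
```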
3.2 Destruction
- SSA destruction, sometimes called "out-of-SSA translation", generally takes place after all SSA optimizations have been performed.
- A critical edge is an edge from a node with several direct successors to a node with several direct predecessors.
- A parallel copy represents a set of copies that have to be executed in parallel.
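A sketch of sequentializing a parallel copy into ordinary copies (an illustrative algorithm: emit a copy once its destination is no longer read by a pending copy, and break the remaining cycles with one temporary):

```python
def sequentialize(copies):
    """copies: dict mapping destination -> source, all in parallel."""
    copies, seq = dict(copies), []
    while copies:
        # A destination nobody still reads can be overwritten safely.
        free = [d for d in copies if d not in copies.values()]
        if free:
            d = free[0]
            seq.append((d, copies.pop(d)))
        else:
            # Only cycles remain, e.g. {a<-b, b<-a}: save one source in
            # a temporary and retarget its readers.
            d = next(iter(copies))
            seq.append(("tmp", d))
            copies = {dst: ("tmp" if src == d else src)
                      for dst, src in copies.items()}
    return seq

print(sequentialize({"a": "b", "b": "a"}))
# [('tmp', 'a'), ('a', 'b'), ('b', 'tmp')]
```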
3.3 SSA Property Transformations
- Semi-pruned SSA form is minimal SSA where Φ-functions are only inserted for non-local (w.r.t. basic blocks) variables.
3.4 Further Reading
- A CFG is reducible if there are no jumps into the middle of loops from the outside, so the only entry to a loop is through its header. A reducible CFG will collapse to a single node when it is transformed using repeated applications of T1/T2 rewrite rules.
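A small sketch of this collapse used as a reducibility test, on hypothetical edge-set CFGs (T1 deletes a self-loop; T2 merges a node that has a unique direct predecessor into that predecessor):

```python
def is_reducible(nodes, edges, entry):
    nodes, edges = set(nodes), set(edges)
    changed = True
    while changed:
        changed = False
        edges -= {(n, n) for n in nodes}             # T1: drop self-loops
        for n in list(nodes):
            preds = {p for (p, s) in edges if s == n}
            if n != entry and len(preds) == 1:       # T2: merge n into p
                (p,) = preds
                edges = {(p if a == n else a, p if b == n else b)
                         for (a, b) in edges if (a, b) != (p, n)}
                nodes.discard(n)
                changed = True
    return len(nodes) == 1

# A single-entry loop is reducible ...
print(is_reducible({"A", "B", "C"},
                   {("A", "B"), ("B", "C"), ("C", "B")}, "A"))   # True
# ... but a loop that can be entered at both B and C is not.
print(is_reducible({"A", "B", "C"},
                   {("A", "B"), ("A", "C"), ("B", "C"), ("C", "B")}, "A"))  # False
```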
Chapter 4: Advanced Construction Algorithms for SSA
4.4 Computing Iterated Dominance Frontier Using Loop Nesting Forests
Chapter 5: SSA Reconstruction
Chapter 6: Functional Representations of SSA
6.1 Low-Level Functional Programming Representations
- Variable Assignment Versus Binding
- A language construct provided by almost all functional languages is the let-binding: `let x = e1 in e2 end`. The effect of this expression is to evaluate `e1` and bind the resulting value to variable `x` for the duration of the evaluation of `e2`.
- The region of code over which the variable bound by the let-expression is visible is that variable's static scope.
- In functional languages, the point of definition for a variable `x` is given by the nearest enclosing binding of `x`.
- Occurrences of variables in an expression that are not enclosed by a binding are called free.
- Functional languages enjoy a strong notion of referential transparency: the choice of `x` as the variable holding the result of `e1` depends only on the free variables of `e2`.
- The semantics-preserving renaming of bound variables is called α-renaming.
- A consequence of referential transparency, and thus a property typically enjoyed by functional languages is compositional equational reasoning: the meaning of a piece of code is only dependent on its free variables and can be calculated from the meaning of its subexpressions.
- Control Flow: Continuations
- Continuation-passing style (CPS) is a program representation routinely used in compilers for functional languages. Serving roughly the same purpose as return addresses or function pointers in imperative languages, a continuation specifies how execution should proceed once the evaluation of the current code fragment has terminated. Syntactically, continuations are expressions that may occur in functional position, i.e., they are typically applied to argument expressions.
- The role of the formal parameter of a continuation is exactly that of a Φ-function: to unify the arguments stemming from various call sites by binding them to a common name for the duration of the ensuing code fragment.
- The scopes of the bindings for the variables mapped to the continuation's formal parameter coincide with the dominance regions of the identically named imperative variables: both terminate at the point of function invocation/jump to the control-flow merge point.
- Control Flow: Direct Style
- An alternative to the explicit passing of continuation terms via additional function arguments is the direct style, in which we represent code as a set of locally named tail-recursive functions, for which the last operation is a call to a function, eliminating the need to save the return address.
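A sketch of the direct-style view in Python (hypothetical names): each basic block becomes a local tail-called function, branch targets become calls, and the formal parameters of a block's function play the role of its Φ-functions:

```python
def sum_to(n):
    # CFG being encoded: entry -> header; header -> header | exit.
    def header(i, acc):          # i, acc: the "Phi-functions" of the header
        if i > n:
            return exit_(acc)    # taking a branch = a tail call
        return header(i + 1, acc + i)   # back edge passes new Phi operands

    def exit_(acc):
        return acc

    return header(1, 0)          # the entry edge passes the initial values

print(sum_to(5))                 # 15
```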
- Let-Normal Form
- A program is in let-normal form when each intermediate result is explicitly named by a variable, and function arguments are names or constants.
Correspondence pairs between functional form and SSA:
| Imperative/SSA concept | Functional concept |
|---|---|
| Assignment (point of definition) | Variable binding in `let` |
| Variable renaming | α-renaming |
| Unique association of defs to uses | Unique association of binding occurrences to uses |
| Φ-function (point of definition) | Formal parameter of continuation/local function |
| Dominance region | Lexical scope of bound variable |
| Direct control-flow successor relationship | Immediate subterm relationship |
| Number of Φ-functions at beginning of block bi | Arity of function fi |
| Distinctness of LHS-variables in Φ-block of bi | Distinctness of formal parameters of fi |
| Arity of Φ-functions in block bi | Number of call sites of function fi |
| Addition/removal of Φ-function | Parameter lifting/dropping |
| Reordering according to dominator tree structure | Block floating/sinking |
| Dominator tree | Potential nesting structure |
6.2 Functional Construction and Destruction of SSA
- Initial Construction Using Liveness Analysis
- A function declaration is closed if all the free variables in its body are contained in its formal parameter list.
- λ-Dropping
- λ-dropping, the inverse of the more widely known transformation λ-lifting, is a technique for eliminating superfluous Φ-functions corresponding to situations where all call sites to a function pass identical arguments. λ-dropping consists of two phases, block sinking and parameter dropping.
- 6.2.2.1 Block Sinking
- Block sinking analyzes the static call structure to identify which function definitions may be moved inside each other. If applied aggressively, block sinking amounts to making the entire dominance tree structure explicit in the program representation.
- 6.2.2.2 Parameter Dropping
- Parameter dropping removes superfluous parameters based on the syntactic scope structure.
- Destruction of SSA
- The task of SSA destruction amounts to converting a functional program with arbitrary argument lists into one whose argument lists and formal parameter lists coincide for each function.
6.4 Further Reading and Concluding Remarks
- Occasionally, the term direct style refers to the combination of tail-recursive functions and let-normal form, and the conditions on the latter notion are strengthened so that only variables may appear as branch conditions. Variations of this discipline include administrative normal form (A-normal, ANF).
Part II: Analysis
Chapter 7: Introduction
Chapter 8: Propagating Information Using SSA
8.1 Preliminaries
- Data-flow analysis derives information about certain interesting program properties that may help to optimize the program. Typical examples of interesting properties are: the set of live variables at a given program point, the particular constant value a variable may take, or the set of program points that are reachable at runtime.
- Formally, a data-flow problem can be specified using a monotone framework that consists of:
- A complete lattice representing the property space
- A set of transfer functions modelling the effect of individual operations on the property space
- A flow graph representing the control flow of the input program
- A complete lattice is a partially ordered set in which every subset has a least upper bound as well as a greatest lower bound.
- In the context of program analysis, set union is often called the join operator and set intersection the meet operator.
- Complete lattices have two distinguished elements, the least element and the greatest element, often denoted by ⊥ ("bottom") and ⊤ ("top"), respectively.
- An ascending chain is a totally ordered subset of a complete lattice. An analogous definition can be given for descending chains.
- Analysis that reverses the flow graph to propagate information is known as backwards analysis, while analysis that uses the regular flow graph is known as forward analysis.
- Every operation is mapped to a transfer function, which transforms the information available from the "in" set of the flow graph node of the operation and stores the result in the corresponding "out" set.
- Solving Data-Flow Problems
- A monotone framework describes a set of data-flow equations whose solution will ultimately converge to the solution of the data-flow analysis.
- A very popular and intuitive way to solve a set of data-flow equations is to compute the maximal (minimal) fixed point (MFP) using an iterative work list algorithm which continually processes a work list of flow graph edges until the data-flow information stabilizes, and the work list is empty.
- The height of a lattice is defined by the length of its longest chain.
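A sketch of such an iterative work-list solver for a forward problem whose property space is sets ordered by inclusion, with union as join; the toy reaching-definitions-style client at the bottom is illustrative, not from the book:

```python
from collections import deque

def solve_forward(succ, entry, init, transfer):
    pred = {n: [] for n in succ}
    for n, ss in succ.items():
        for s in ss:
            pred[s].append(n)
    out = {n: set() for n in succ}
    work = deque(succ)                  # examine every node at least once
    while work:
        n = work.popleft()
        # Join over direct predecessors (the entry gets the initial state).
        in_n = set().union(*(out[p] for p in pred[n])) if pred[n] else set(init)
        new = transfer(n, in_n)
        if new != out[n]:
            out[n] = new
            work.extend(succ[n])        # successors must be re-examined
    return out

# Toy client: which definitions may reach the end of each block?
defs  = {"A": {"x1"}, "B": {"x2"}, "C": set(), "D": set()}
kills = {"A": set(), "B": {"x1"}, "C": set(), "D": set()}
cfg   = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
out = solve_forward(cfg, "A", set(), lambda n, s: (s - kills[n]) | defs[n])
print(sorted(out["D"]))                 # ['x1', 'x2']
```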
8.2 Data-Flow Propagation Under SSA Form
- Sparse Data-Flow Propagation
- Similar to monotone frameworks for traditional data-flow analysis, frameworks for sparse data-flow propagation under SSA form can be defined using:
- A complete lattice representing the property space
- A set of transfer functions for the operations of the program
- A control-flow graph capturing the execution flow of the program
- An SSA graph representing data dependencies
- A CFG edge is considered to be executable if data-flow analysis cannot rule out that a program execution traversing the edge exists.
Chapter 9: Liveness
9.1 Definitions
- A variable is considered to be live at a given program point when its value may be used in the future of some execution. Put another way, a variable is live at all program points along the paths in the CFG from the definition of the variable to its uses.
- For a CFG node `q`, representing an instruction or a basic block, a variable `v` is live-in at `q` if there is a path, not containing the definition of `v`, from `q` to a node where `v` is used (including `q` itself). It is live-out at `q` if it is live-in at some direct successor of `q`.
- The computation of live-in and live-out sets at the entry and exit of basic blocks is usually termed liveness analysis.
- The live range of a variable specifies the set of program points where that variable is live.
9.2 Data-Flow Approaches
- A use is upward-exposed in a block when there is no local definition preceding it, i.e., the live range "escapes" the block at the top.
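A sketch of the backward computation these definitions induce: live-out(B) is the union of live-in over B's direct successors, and live-in(B) is B's upward-exposed uses plus whatever is live-out and not (re)defined in B (hypothetical three-block CFG with a loop):

```python
def liveness(succ, use, defs):
    live_in  = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:                      # round-robin until a fixed point
        changed = False
        for n in succ:
            out = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            inn = use[n] | (out - defs[n])
            if out != live_out[n] or inn != live_in[n]:
                live_out[n], live_in[n], changed = out, inn, True
    return live_in, live_out

# x is defined in A and used in the loop body B; C is the exit.
cfg  = {"A": ["B"], "B": ["B", "C"], "C": []}
use  = {"A": set(), "B": {"x"}, "C": set()}
defs = {"A": {"x"}, "B": set(), "C": set()}
live_in, live_out = liveness(cfg, use, defs)
print(live_out["A"], live_in["B"])      # {'x'} {'x'}
```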
- Liveness Sets on Irreducible Flow Graphs
- The outermost excluding loop of `t` with regard to `s`, denoted `t.OLE(s)`, is the largest loop containing `t` but not `s`.
- For computing liveness on an irreducible CFG, it is necessary to use instead the modified-forward CFG, which redirects any edge `s -> t` to `t.OLE(s)` (the outermost loop containing `t` but excluding `s`).
Chapter 10: Loop Tree and Induction Variables
10.1 Part of the CFG and Loop Tree Can Be Exposed Through the SSA
- The classical SSA representation is not sufficient to represent the semantics of the original program.
- Loop-closed SSA form adds an extra variable at the end of a loop for each variable defined in a loop and used after the loop.
- 10.1.2 Natural Loop Structures on the SSA
- A loop-Φ node, denoted `i = Φ(x,j)` or `i = Φentry(x,j)`, has an argument that contains a self-reference `j` and an invariant argument `x`.
- A close-Φ node, denoted `k = Φexit(i)`, captures the last value of a name defined in a loop.
- 10.1.3 Improving the SSA Pretty Printer for Loops
- To capture the semantics of the computation of the loop, we have to specify the loop exit condition in the close-Φ node when we exit the loop, so as to be able to derive which value will be available at the end of the loop: `Φexit(i > N, i)`.
10.2 Analysis of Induction Variables
- 10.2.2 Translation to a Chain of Recurrences
- The syntax of a polynomial chain of recurrences is `{base,+,step}x`, where `base` and `step` may be arbitrary expressions or constants, and `x` is the loop with which the sequence is associated.
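A sketch of evaluating such a chain of recurrences at iteration `n`, assuming all coefficients are constants (nested chains, where the step is itself a chain, flatten into the coefficient list). The closed form is the standard Newton series, the sum of ci * C(n, i); names are illustrative:

```python
from math import comb

def chrec_at(coeffs, n):
    """Value of {c0,+,c1,+,...,+,ck}_x at iteration n of loop x."""
    return sum(c * comb(n, i) for i, c in enumerate(coeffs))

print(chrec_at([3, 2], 4))       # {3,+,2} at n=4: 3 + 2*4 = 11
print([chrec_at([0, 1, 1], n) for n in range(4)])
# [0, 1, 3, 6]: {0,+,1,+,1} is the triangular-number sequence
```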
- 10.2.3 Instantiation of Symbols and Region Parameters
- In some cases, it becomes necessary to leave in a symbolic form every definition outside a given region, and these symbols are then called parameters of the region.
- 10.2.4 Number of Iterations and Computation of the End-of-Loop Value
- One of the important static analyses for loops is to evaluate their trip count, i.e., the number of times the loop body is executed before the exit condition becomes `true`.
- In the common case where the loop-exit condition is a comparison of an induction variable against some constant, parameter, or other induction variables, the number of iterations is computed as the minimum solution of a polynomial inequality with integer solutions, also called a Diophantine inequality.
- Scalar evolution analysis (SCEV) is an analysis of the change of scalar quantities in a program, often loop induction variables.
Chapter 11: Redundancy Elimination
- Redundancy elimination is an optimization where the result of a computation that is performed multiple times yielding the same result is saved and reused in place of re-doing the computation.
- A computation is fully redundant if the computation has occurred earlier regardless of the flow of control.
- The elimination of full redundancy is also called common subexpression elimination.
- A computation is partially redundant if the computation has occurred only along certain paths.
- In syntax-driven redundancy elimination algorithms, two computations are the same if they are the same operation applied to the same operands that are program variables or constants.
- In value-based redundancy elimination algorithms, redundancy arises whenever two computations yield the same value.
- Expressions compute to values without generating any side effect.
- Statements have side effects as they potentially alter memory contents or control flow, and are not candidates for redundancy elimination.
- In a maximal expression tree form of program representation, large expression trees are represented as is without having to specify any assignments to temporaries for storing the intermediate values of the computation.
- The opposite of maximal expression tree form is triplet form in which each arithmetic operation always defines a temporary.
11.1 Why Partial Redundancy Elimination and SSA Are Related
- A redundancy is strictly partial if insertions are required to eliminate the redundancy. In contrast, a full redundancy can be deleted without requiring any insertion.
- Partial redundancy elimination (PRE) subsumes global common subexpressions and loop-invariant code motion.
- In the region of the control-flow graph dominated by the occurrence of an expression, any further occurrence of that expression is fully redundant, assuming no variables participating in the expression are modified. Following the program flow, once we are past the dominance frontiers, any further occurrence of the expression is partially redundant.
- To expose potential partial redundancies, we introduce operator Φ (uppercase) at the dominance frontiers of the occurrences, which has the effect of factoring the redundancy edges at merge points in the control-flow graph. The resulting factored redundancy graph (FRG) can be regarded as the SSA form for expressions.
- There are three kinds of occurrences of the expression in the program:
- the occurrences in the original program, called real occurrences
- the inserted Φs, called Φ-def
- the use operands of the Φs, called Φ-use, which are regarded as occurring at the ends of the direct predecessor blocks of their corresponding edges
11.2 How SSAPRE Works
- The placement of an expression denotes the set of points in the optimized program where that expression's computation occurs.
- The original computation points of an expression refers to the points in the original program where that expression's computation took place.
11.3 Speculative PRE
- Speculative code motion suppresses redundancy in some paths at the expense of another path where the computation is added but the result is unused. As long as the paths that are burdened with more computations are executed less frequently than the paths where the redundant computations are avoided, a net gain in program performance can be achieved.
- Without profile data, speculative PRE can be conservatively performed by restricting it to loop-invariant code motion.
- Computations such as indirect loads and divides are called dangerous computations because they may generate a fault. Dangerous computations in general should not be speculated.
- Dangerous computations are sometimes protected by tests (or guards) placed in the code by the programmers or automatically generated by language compilers. When such a test occurs in the program, we say the dangerous computation is safety-dependent on the control-flow point that establishes its safety.
- At the points in the program where its safety dependence is satisfied, a dangerous instruction is fault-safe and can still be speculated.
11.4 Register Promotion via PRE
- Register promotion allocates variables to pseudo-registers (also called symbolic registers, virtual registers, or temporaries) whenever possible and optimizes the placement of the loads and stores that transfer their values between memory and registers.
- Register allocation fits the unlimited number of pseudo-registers to the limited number of real registers (also called physical registers).
- 11.4.1 Register Promotion as Placement Optimization
- Register promotion can be modelled as two separate problems: PRE of loads (LPRE), followed by PRE of stores (SPRE). Combining the effects of the PRE of loads and stores results in optimal placements of loads and stores while minimizing the live ranges of the pseudo-registers, by virtue of the computational and lifetime optimality of the PRE algorithm.
- 11.4.3 Store Placement Optimization
- In static single use (SSU) form, use-def edges are factored at divergence points in the control-flow graph using σ-functions. Each use of a variable establishes a new version (we say the load uses the version), and every store reaches exactly one load.
- SSUPRE is the store PRE algorithm; its steps mirror the corresponding steps of SSAPRE.
- A σ is upsafe if it is fully available.
Part III: Extensions
Chapter 12: Introduction
12.1 Static Single Information Form
- The process of providing distinct names for different uses of a variable in different control flow branches is called live range splitting.
- Range analysis is the process of leveraging information from branch conditions to estimate the interval of values that an integer variable may assume throughout the execution of a program.
- Null-pointer analysis is the process of leveraging information from prior uses of reference variables.
12.3 Gated SSA Forms
- Data flow within a program can be represented by a data-flow graph (DFG).
- Data dependencies within a program can be represented by a data dependence graph (DDG).
- Control dependencies within a program can be represented by a control dependence graph (CDG).
- Data and control dependencies can be represented together in a program dependence graph (PDG).
12.4 Psi-SSA Form
- A predicated operation is an alternative representation of a fine-grained control-flow structure, often obtained by using the well-known if-conversion transformation.
- A ψ-function performs a merge of values from predicated operations, analogous to the merge performed by φ-functions at join points in vanilla SSA.
12.5 Hashed SSA Form
- Variable versions that have no "real" occurrence in the program, i.e., do not appear in a real instruction outside of a φ-, μ-, or χ-function, can be grouped together into a single version called the zero version of the variable.
Chapter 13: Static Single Information Form
- The facts discovered to be true about a program by data-flow analysis are called information.
- If the intermediate representation of a program guarantees that some information about a variable holds at every program point where that variable is live, then we say that the program representation provides the Static Single Information (SSI) property.
- The Extended-SSA (e-SSA) form provides the SSI property to analyses that take information from the definition site of variables, and also from conditional tests where these variables are used.
- The Static Single Information (SSI) form provides the SSI property to data-flow analyses that extract information from the definition sites of variables, and from the last use sites.
13.1 Static Single Information
- 13.1.1 Sparse Analysis
- An analysis is called dense if it maintains information at each program point.
- A transfer function is an identity if it does not add any new information to the state of the data being maintained.
- The goal of sparse data-flow analysis is to shortcut the identity transfer functions by grouping contiguous program points bound by identities into larger regions.
- 13.1.2 Partitioned Lattice per Variable (PLV) Problems
- A Partitioned Lattice per Variable (PLV) problem is a non-relational data-flow analysis problem where information is bound to pairs of program variables and program points.
- A Partitioned Variable Problem (PVP) is a subclass of PLV problems which can be decomposed into a set of sparse data-flow problems, usually one per variable, each independent of the others.
- 13.1.4 Special Instructions Used to Split Live Ranges
- A σ-function is the dual of a φ-function, and represents a branch-point in a similar way that a φ-function represents a join-point.
- A parallel copy is a copy that must be done in parallel with another instruction.
- An interior node is a program point that has a unique direct predecessor and a unique direct successor.
- A join is a program point that has one direct successor and multiple direct predecessors.
- A branch is a program point that has one direct predecessor and multiple direct successors.
Chapter 14: Graphs and Gating Functions
14.1 Data-Flow Graphs
- A data-flow graph (DFG) is a directed graph `G = (V, E)` where the edges `E` represent the flow of data from the result of one instruction to the input of another.
14.2 The SSA Graph
- An SSA graph is a literal representation of SSA; it consists of vertices that represent instructions or φ-functions, and directed edges that connect uses to definitions of values. The outgoing edges of a vertex represent the arguments required for that instruction, and the incoming edges of a vertex represent the propagation of the instruction's result(s) after they have been computed.
- A graph is demand-based if, in order to process a node (compute an instruction), the nodes supplying its operands must be processed first (the operands must be computed).
14.3 Program Dependence Graph
- A program dependence graph (PDG) is a directed graph `G = (V, E)` where the nodes `V` are statements, predicate expressions, or region nodes, and the edges `E` represent either control or data dependencies. Thus, the set of all edges `E` has two distinct subsets: the control dependence subgraph and the data dependence subgraph.
- A node `w` is said to be control dependent on edge `(u,v)` if `w` post-dominates `v` and `w` does not strictly post-dominate `u`.
- A merge node is an upwards reaching leaf in the DAG for a basic block.
14.4 Gating Functions and GSA
- Gated single assignment form (GSA, or sometimes, "gated SSA") is an extension of SSA with gating functions.
- A gating function is a directly interpretable version of a φ-node; gating functions replace φ-nodes in the representation.
- The φif function explicitly represents the condition that determines which value the φ should select. A φif function of the form φif(p, v1, v2) has `p` as a predicate, and `v1` and `v2` as the values to be selected if the predicate evaluates to true or false, respectively.
- The φentry function is inserted at loop headers to select the initial and loop-carried values. A φentry function of the form φentry(vinit, viter) has `vinit` as the initial input value for the loop, and `viter` as the iterative input. We replace φ-functions at loop headers with φentry functions.
- The φexit function determines the value of a variable when a loop terminates. A φexit function of the form φexit(p, vexit) has `p` as predicate and `vexit` as the definition reaching beyond the loop.
14.5 Value State Dependence Graph
- 14.5.1 Definition of the VSDG
- A value state dependence graph (VSDG) is a directed graph consisting of operation, loop, and merge nodes together with value and state dependency edges.
- State dependency edges that are incrementally added to the VSDG until it corresponds to a unique CFG are called serializing edges.
- A γ-node `γ(C : p, T : vtrue, F : vfalse)` evaluates the condition dependency `p` and returns the value of `vtrue` if `p` is true, otherwise `vfalse`.
- A θ-node `θ(C : p, I : vinit, R : vreturn, L : viter, X : vexit)` sets its internal value to the initial value `vinit` and then, while condition `p` holds true, sets `viter` to the current internal value and updates the internal value with the repeat value `vreturn`. When `p` evaluates to false, computation ceases, and the last internal value is returned through `vexit`.
- The internal value of a θ-node refers to the values of its variables during loop execution.
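A small interpretive sketch of the γ- and θ-node semantics above, paraphrased as plain Python functions (an illustration of the behavior only, not the VSDG data structure):

```python
def gamma(p, v_true, v_false):
    # gamma(C: p, T: v_true, F: v_false) is a value-level select.
    return v_true if p else v_false

def theta(p, v_init, step):
    # theta(C: p, I: v_init, R: step(v), L: v, X: result):
    # while the condition holds, the repeat value becomes the new
    # internal value; the last internal value leaves through X.
    v = v_init
    while p(v):
        v = step(v)
    return v

print(gamma(3 > 2, "then", "else"))                   # then
print(theta(lambda v: v < 10, 0, lambda v: v + 3))    # 12
```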
- 14.5.2 Dead Node Elimination with the VSDG
- Dead node elimination is the combination of dead code elimination and unreachable code elimination.
- A dead node is a node that is not post-dominated by the exit node N∞.
Chapter 15: Psi-SSA Form
- ψ-SSA Form is an extension of SSA form where control-flow information (conditionals) is encoded in the SSA statements.
15.1 Definition and Construction
- A ψ-function a0 = ψ(p1?a1, …, pi?ai, …, pn?an) defines one variable, a0, and takes a number of arguments ai; each argument ai is associated with a predicate pi. In the notation, the predicate pi will be omitted if pi ≡ true.
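A sketch of the selection a ψ-function performs, under the convention that arguments appear in dominance order, so a later argument whose guard is true overrides an earlier one (cf. ψ-reduction below); the encoding is illustrative:

```python
def psi(*guarded):
    """guarded: (predicate, value) pairs, left to right."""
    result = None                # undefined if no guard holds
    for p, a in guarded:
        if p:
            result = a           # a later true guard overrides earlier ones
    return result

# a0 = psi(true?x, p?y) models:  a = x; if p: a = y
x, y, p = 1, 2, True
print(psi((True, x), (p, y)))    # 2
```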
15.3 Psi-SSA Algorithms
- ψ-Inlining recursively replaces in a ψ-function an argument ai that is defined by another ψ-function with the arguments of that other ψ-function.
- ψ-Reduction removes from a ψ-function an argument ai whose value will always be overridden by arguments on its right in the argument list.
- ψ-Projection creates from a ψ-function a new ψ-function on a restricted predicate, say p. In this new ψ-function, an argument ai initially guarded by pi will be guarded by the conjunction pi ∧ p.
- ψ-Permutation changes the order of the arguments in a ψ-function.
- ψ-Promotion changes one of the predicates used in a ψ-function to a larger predicate.
15.4 Psi-SSA Destruction
- A ψ-φ-web is a non-empty, minimal set of variables such that if two variables are referenced on the same φ- or ψ-function, then they are in the same ψ-φ-web.
- The property of the ψ-C-SSA form is that the renaming into a single variable of all variables that belong to the same ψ-φ-web, and the removal of the ψ- and φ-functions, results in a program with the same semantics as the original program.
- 15.4.1 Psi-Normalize
- In normalized-ψ, two properties hold: 1) the order of the arguments is, from left to right, equal to the order of their definitions, from top to bottom, in the control-flow dominance tree, and 2) the predicate associated with each argument is equal to the predicate used on the unique definition of this argument.
Chapter 16: Hashed SSA Form
- Hashed SSA (HSSA) is an SSA extension that can effectively represent how aliasing relations affect a program in SSA form.
16.1 SSA and Aliasing: μ- and χ-Functions
- A μ-function is used to indicate that the variable given as its argument is potentially used (a may-use).
- A χ-function is used to indicate that the variable given as its argument is potentially assigned (a may-def).
16.2 Introducing "Zero Version" to Limit Version Explosion
- The zero version is a special version assigned to variables after χ-functions to improve the efficiency of HSSA by avoiding the creation of new variable versions that would be useless due to the uncertainty introduced by a χ-function.
16.3 SSA for Indirect Memory Operations: Virtual Variables
- A virtual variable is an abstraction of a memory area which is used to represent the target locations of indirect memory operations.
- Assignment factoring is the process of replacing multiple use-def chains belonging to different virtual variables with one use-def chain that encompasses more nodes and thus more versions.
Chapter 17: Array SSA Form
- Array SSA form captures element-level data-flow information for array variables and coincides with standard SSA form when applied to scalar variables.
17.1 Array SSA Form
- An iteration vector i = (i1, …, in) is an n-tuple of iteration numbers, one for each enclosing loop.
- An @-variable records the most recent iteration vector at which definition Aj was executed.
- A control-φ-function is a new name for the φ-function introduced in vanilla SSA form, to distinguish it from definition-φ-functions.
- A definition-φ-function is a φ-function which is inserted immediately after a non-killing definition of an array and allows the preservation of non-killed elements of that array.
- Full Array SSA form is Array SSA form with all of its features, including @-variables, which allows for runtime evaluation of φ-functions.
- Partial Array SSA form is an approximation of Full Array SSA form used for static analysis which does not require the inclusion of @-variables.
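As an illustration of the element-level semantics above, the sketch below paraphrases a definition-φ merging a non-killing definition into the prior array state, using @-variables as per-element timestamps (iteration vectors are flattened to single integers; all names are hypothetical):

```python
def definition_phi(a1, at1, a0, at0):
    """Per element, keep the value whose @-timestamp is most recent."""
    merged    = [v1 if t1 >= t0 else v0
                 for v1, t1, v0, t0 in zip(a1, at1, a0, at0)]
    merged_at = [max(t1, t0) for t1, t0 in zip(at1, at0)]
    return merged, merged_at

a0, at0 = [10, 20, 30], [1, 1, 1]   # a full (killing) definition at time 1
a1, at1 = [99,  0,  0], [2, 0, 0]   # later, only a1[0] = 99, at time 2
print(definition_phi(a1, at1, a0, at0))   # ([99, 20, 30], [2, 1, 1])
```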