
An evaluator specification is either a newly created evaluator instance or an evaluator that has been defined previously. This page describes how one can specify a new evaluator instance. For re-using evaluators, see OptionSyntax#Evaluator_Predefinitions.

If the evaluator is a heuristic, the descriptions below use the following definitions of its properties:

  • admissible: h(s) <= h*(s) for all states s
  • consistent: h(s) <= c(s, s') + h(s') for all states s connected to states s' by an action with cost c(s, s')
  • safe: h(s) = infinity is only true for states with h*(s) = infinity
  • preferred operators: this heuristic identifies preferred operators

This feature type can be bound to variables using let(variable_name, variable_definition, expression) where expression can use variable_name. Predefinitions using --evaluator, --heuristic, and --landmarks are automatically transformed into let-expressions but are deprecated.
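For example, assuming the usual driver invocation (domain.pddl and problem.pddl are placeholder input files), a heuristic instance can be bound to a variable and reused for both evaluation and preferred operators:

    ./fast-downward.py domain.pddl problem.pddl --search "let(hff, ff(), lazy_greedy([hff], preferred=[hff]))"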

Additive heuristic#

add(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: yes for tasks without axioms
  • preferred operators: yes
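As a usage sketch, the additive heuristic is typically combined with greedy search; the transform option can be used to ignore action costs:

    --search "let(hadd, add(transform=adapt_costs(one)), eager_greedy([hadd], preferred=[hadd]))"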

Blind heuristic#

Returns the cost of the cheapest action for non-goal states and 0 for goal states.

blind(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
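Since the blind heuristic is admissible and consistent, it serves as a simple baseline for optimal planning with A*:

    --search "astar(blind())"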

Context-enhanced additive heuristic#

cea(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: no
  • preferred operators: yes

Additive Cartesian CEGAR heuristic#

See the paper introducing counterexample-guided Cartesian abstraction refinement (CEGAR) for classical planning:

and the paper showing how to make the abstractions additive:

For more details on Cartesian CEGAR and saturated cost partitioning, see the journal paper

For a description of the incremental search, see the paper

Finally, we describe advanced flaw selection strategies here:

  • David Speck and Jendrik Seipp.
    New Refinement Strategies for Cartesian Abstractions.
    In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. to appear. AAAI Press, 2022.

    cegar(subtasks=[landmarks(order=random), goals(order=random)], max_states=infinity, max_transitions=1M, max_time=infinity, pick_flawed_abstract_state=batch_min_h, pick_split=max_cover, tiebreak_split=max_refined, memory_padding=500, dot_graph_verbosity=silent, random_seed=-1, max_concrete_states_per_abstract_state=infinity, max_state_expansions=1M, use_general_costs=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • subtasks (list of SubtaskGenerator): subtask generators

  • max_states (int [1, infinity]): maximum sum of abstract states over all abstractions
  • max_transitions (int [0, infinity]): maximum sum of state-changing transitions (excluding self-loops) over all abstractions
  • max_time (double [0.0, infinity]): maximum time in seconds for building abstractions
  • pick_flawed_abstract_state ({first, first_on_shortest_path, random, min_h, max_h, batch_min_h}): flaw-selection strategy
    • first: Consider first encountered flawed abstract state and a random concrete state.
    • first_on_shortest_path: Follow the arbitrary solution in the shortest path tree (no flaw search). Consider first encountered flawed abstract state and a random concrete state.
    • random: Collect all flawed abstract states and then consider a random abstract state and a random concrete state.
    • min_h: Collect all flawed abstract states and then consider a random abstract state with minimum h value and a random concrete state.
    • max_h: Collect all flawed abstract states and then consider a random abstract state with maximum h value and a random concrete state.
    • batch_min_h: Collect all flawed abstract states and iteratively refine them (by increasing h value). Only start a new flaw search once all remaining flawed abstract states are refined. For each abstract state consider all concrete states.
  • pick_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • tiebreak_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy for breaking ties
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • memory_padding (int [0, infinity]): amount of extra memory in MB to reserve for recovering from out-of-memory situations gracefully. When the memory runs out, we stop refining and start the search. Due to memory fragmentation, the memory used for building the abstraction (states, transitions, etc.) often can't be reused for things that require big continuous blocks of memory. It is for this reason that we require a rather large amount of memory padding by default.
  • dot_graph_verbosity ({silent, write_to_console, write_to_file}): verbosity of printing/writing dot graphs
    • silent: do not output dot graphs
    • write_to_console: print dot graphs to the console
    • write_to_file: write dot graphs to files
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • max_concrete_states_per_abstract_state (int [1, infinity]): maximum number of flawed concrete states stored per abstract state
  • max_state_expansions (int [1, infinity]): maximum number of state expansions per flaw search
  • use_general_costs (bool): allow negative costs in cost partitioning
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
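A minimal sketch for optimal planning with A*; the limits shown are illustrative values, not recommendations:

    --search "astar(cegar(max_states=10000, max_time=300))"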

Causal graph heuristic#

cg(max_cache_size=1000000, verbosity=normal, transform=no_transform(), cache_estimates=true)
  • max_cache_size (int [0, infinity]): maximum number of cached entries per variable (set to 0 to disable cache)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: no
  • preferred operators: yes

FF heuristic#

ff(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: yes for tasks without axioms
  • preferred operators: yes
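A common satisficing configuration runs greedy best-first search with the FF heuristic and its preferred operators, for example:

    --search "let(hff, ff(), eager_greedy([hff], preferred=[hff]))"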

Goal count heuristic#

goalcount(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: ignored by design
  • conditional effects: supported
  • axioms: supported

Properties:

  • admissible: no
  • consistent: no
  • safe: yes
  • preferred operators: no

h^m heuristic#

hm(m=2, verbosity=normal, transform=no_transform(), cache_estimates=true)
  • m (int [1, infinity]): subset size
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: ignored
  • axioms: ignored

Properties:

  • admissible: yes for tasks without conditional effects or axioms
  • consistent: yes for tasks without conditional effects or axioms
  • safe: yes for tasks without conditional effects or axioms
  • preferred operators: no

Max heuristic#

hmax(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: yes for tasks without axioms
  • consistent: yes for tasks without axioms
  • safe: yes for tasks without axioms
  • preferred operators: no
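Being admissible and consistent on axiom-free tasks, hmax can be used for optimal search:

    --search "astar(hmax())"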

Landmark cost partitioning heuristic#

Landmark progression is implemented according to the following paper:

  • Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert.
    Landmarks Progression in Heuristic Search.
    In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

    landmark_cost_partitioning(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true, cost_partitioning=uniform, scoring_function=max_heuristic_per_stolen_costs, alm=true, lpsolver=cplex, random_seed=-1)

  • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

  • pref (bool): enable preferred operators (see note below)
  • prog_goal (bool): Use goal progression.
  • prog_gn (bool): Use greedy-necessary ordering progression.
  • prog_r (bool): Use reasonable ordering progression.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • cost_partitioning ({optimal, uniform, opportunistic_uniform, greedy_zero_one, saturated, canonical, pho}): strategy for partitioning operator costs among landmarks
    • optimal: use optimal (LP-based) cost partitioning
    • uniform: partition operator costs uniformly among all landmarks achieved by that operator
    • opportunistic_uniform: like uniform, but order landmarks and reuse costs not consumed by earlier landmarks
    • greedy_zero_one: order landmarks and give each landmark the costs of all the operators it contains
    • saturated: like greedy_zero_one, but reuse costs not consumed by earlier landmarks
    • canonical: canonical heuristic over landmarks
    • pho: post-hoc optimization over landmarks
  • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
    • max_heuristic: order by decreasing heuristic value for the given state
    • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
    • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
  • alm (bool): use action landmarks
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Usage with A*: We recommend adding this heuristic as a lazy_evaluator when using it in the A* algorithm. This way, the heuristic is recomputed before a state is expanded, leading to improved estimates that incorporate all knowledge gained from paths found after the state was inserted into the open list.

Consistency: The heuristic is consistent along single paths if it is set as a lazy_evaluator; i.e., when expanding a state s, we have h(s) <= h(s') + cost(a) for all successors s' of s reached via an action a. However, newly found paths to s can increase h(s), at which point the above inequality might no longer hold.

Optimal Cost Partitioning: To use cost_partitioning=optimal, you must build the planner with LP support. See build instructions.

Preferred operators: Preferred operators should not be used for optimal planning. See Landmark sum heuristic for more information on using preferred operators; the comments there also apply to this heuristic.

Supported language features:

  • action costs: supported
  • conditional effects: supported if the LandmarkFactory supports them; otherwise not supported
  • axioms: not allowed

Properties:

  • preferred operators: yes (if enabled; see pref option)
  • admissible: yes
  • consistent: no; see document note about consistency
  • safe: yes
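A sketch of the recommended A* usage with lazy_evaluator; lm_merged, lm_rhw and lm_hm are landmark factories documented on the LandmarkFactory page:

    --search "let(hlm, landmark_cost_partitioning(lm_merged([lm_rhw(), lm_hm(m=1)])), astar(hlm, lazy_evaluator=hlm))"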

Landmark sum heuristic#

Landmark progression is implemented according to the following paper:

  • Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert.
    Landmarks Progression in Heuristic Search.
    In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

    landmark_sum(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

  • pref (bool): enable preferred operators (see note below)
  • prog_goal (bool): Use goal progression.
  • prog_gn (bool): Use greedy-necessary ordering progression.
  • prog_r (bool): Use reasonable ordering progression.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note on performance for satisficing planning: The cost of a landmark is based on the costs of the operators that achieve it. For satisficing search, this can be counterproductive, since it is often better to focus on the distance to the goal (i.e., plan length) rather than on cost. In our experiments, we achieved the best performance using the option transform=adapt_costs(one) to enforce unit costs.

Preferred operators: Computing preferred operators is only enabled when setting pref=true because it has a nontrivial runtime cost; with the default pref=false, the heuristic reports no preferred operators. Our implementation of computing preferred operators based on landmarks differs from the description in the literature (see reference above). The original implementation computes two kinds of preferred operators:

  1. If there is an applicable operator that reaches a landmark, all such operators are preferred.
  2. If no such operators exist, perform an FF-style relaxed exploration towards the nearest landmarks (according to the landmark orderings) and use the preferred operators of this exploration.

Our implementation only considers preferred operators of the first type and does not include the second type. The rationale for this change is that it reduces code complexity and helps more cleanly separate landmark-based and FF-based computations in LAMA-like planner configurations. In our experiments, only considering preferred operators of the first type reduces performance when using the heuristic and its preferred operators in isolation but improves performance when using this heuristic in conjunction with the FF heuristic, as in LAMA-like planner configurations.

Supported language features:

  • action costs: supported
  • conditional effects: supported if the LandmarkFactory supports them; otherwise ignored
  • axioms: ignored

Properties:

  • preferred operators: yes (if enabled; see pref option)
  • admissible: no
  • consistent: no
  • safe: yes except on tasks with axioms or on tasks with conditional effects when using a LandmarkFactory not supporting them
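A sketch combining the performance and preferred-operator notes above; lm_rhw is one of the landmark factories documented on the LandmarkFactory page:

    --search "let(hlm, landmark_sum(lm_rhw(), transform=adapt_costs(one), pref=true), lazy_greedy([hlm], preferred=[hlm]))"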

Landmark-cut heuristic#

lmcut(verbosity=normal, transform=no_transform(), cache_estimates=true)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: no
  • safe: yes
  • preferred operators: no
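LM-cut is a standard choice for optimal planning with A*:

    --search "astar(lmcut())"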

Merge-and-shrink heuristic#

This heuristic implements the algorithm described in the following paper:

For a more exhaustive description of merge-and-shrink, see the journal paper

The following paper describes how to improve the DFP merge strategy with tie-breaking, and presents two new merge strategies (dyn-MIASM and SCC-DFP):

Details of the algorithms and the implementation are described in the paper

  • Silvan Sievers.
    Merge-and-Shrink Heuristics for Classical Planning: Efficient Implementation and Partial Abstractions.
    In Proceedings of the 11th Annual Symposium on Combinatorial Search (SoCS 2018), pp. 90-98. AAAI Press, 2018.

    merge_and_shrink(verbosity=normal, transform=no_transform(), cache_estimates=true, merge_strategy, shrink_strategy, label_reduction=, prune_unreachable_states=true, prune_irrelevant_states=true, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, main_loop_max_time=infinity)

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • merge_strategy (MergeStrategy): See detailed documentation for merge strategies. We currently recommend SCC-DFP, which can be achieved using merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order]))
  • shrink_strategy (ShrinkStrategy): See detailed documentation for shrink strategies. We currently recommend non-greedy shrink_bisimulation, which can be achieved using shrink_strategy=shrink_bisimulation(greedy=false)
  • label_reduction (LabelReduction): See detailed documentation for labels. There is currently only one option for label_reduction, namely label_reduction=exact. Also note the interaction with shrink strategies.
  • prune_unreachable_states (bool): If true, prune abstract states unreachable from the initial state.
  • prune_irrelevant_states (bool): If true, prune abstract states from which no goal state can be reached.
  • max_states (int [-1, infinity]): maximum transition system size allowed at any time point.
  • max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.
  • threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.
  • main_loop_max_time (double [0.0, infinity]): A limit in seconds on the runtime of the main loop of the algorithm. If the limit is exceeded, the algorithm terminates, potentially returning a factored transition system with several factors. Also note that the time limit is only checked between transformations of the main loop, but not during, so it can be exceeded if a transformation is runtime-intense.

Note: Conditional effects are supported directly. Note, however, that for tasks that are not factored (in the sense of the JACM 2014 merge-and-shrink paper), the atomic transition systems on which merge-and-shrink heuristics are based are nondeterministic, which can lead to poor heuristics even when only perfect shrinking is performed.

Note: When pruning unreachable states, admissibility and consistency is only guaranteed for reachable states and transitions between reachable states. While this does not impact regular A* search which will never encounter any unreachable state, it impacts techniques like symmetry-based pruning: a reachable state which is mapped to an unreachable symmetric state (which hence is pruned) would falsely be considered a dead-end and also be pruned, thus violating optimality of the search.

Note: When using a time limit on the main loop of the merge-and-shrink algorithm, the heuristic will compute the maximum over all heuristics induced by the remaining factors if terminating the merge-and-shrink algorithm early. Exception: if there is an unsolvable factor, it will be used as the exclusive heuristic since the problem is unsolvable.

Note: A currently recommended good configuration uses bisimulation based shrinking, the merge strategy SCC-DFP, and the appropriate label reduction setting (max_states has been altered to be between 10k and 200k in the literature). As merge-and-shrink heuristics can be expensive to compute, we also recommend limiting time by setting main_loop_max_time to a finite value. A sensible value would be half of the time allocated for the planner.

merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false),merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance(),dfp(),total_order()])),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50k,threshold_before_merge=1)

Supported language features:

  • action costs: supported
  • conditional effects: supported (but see note)
  • axioms: not supported

Properties:

  • admissible: yes (but see note)
  • consistent: yes (but see note)
  • safe: yes
  • preferred operators: no
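Following the recommendation above, the full configuration can be wrapped in A* with a time limit on the main loop (900 seconds is an illustrative value):

    --search "astar(merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false), merge_strategy=merge_sccs(order_of_sccs=topological, merge_selector=score_based_filtering(scoring_functions=[goal_relevance(), dfp(), total_order()])), label_reduction=exact(before_shrinking=true, before_merging=false), max_states=50k, threshold_before_merge=1, main_loop_max_time=900))"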

Operator-counting heuristic#

An operator-counting heuristic solves a linear program (LP) in each state. The LP has one variable Count_o for each operator o that represents how often the operator is used in a plan. Operator-counting constraints are linear constraints over these variables that are guaranteed to have a solution with Count_o = occurrences(o, pi) for every plan pi. Minimizing the total cost of operators subject to some operator-counting constraints is an admissible heuristic. For details, see

  • Florian Pommerening, Gabriele Röger, Malte Helmert and Blai Bonet.
    LP-based Heuristics for Cost-optimal Planning.
    In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

    operatorcounting(constraint_generators, use_integer_operator_counts=false, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • constraint_generators (list of ConstraintGenerator): methods that generate constraints over operator-counting variables

  • use_integer_operator_counts (bool): restrict operator-counting variables to integer values. Computing the heuristic with integer variables can produce higher values but requires solving a MIP instead of an LP which is generally more computationally expensive. Turning this option on can thus drastically increase the runtime.
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)

Properties:

  • admissible: yes
  • consistent: yes, if all constraint generators represent consistent heuristics
  • safe: yes
  • preferred operators: no
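As a usage sketch, the heuristic can be plugged into A* with a list of constraint generators. The generator names used here (state_equation_constraints(), lmcut_constraints()) are taken from other parts of the planner documentation and are assumptions in this example:

```
astar(operatorcounting([state_equation_constraints(), lmcut_constraints()]))
```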

Basic Evaluators#

Constant evaluator#

Returns a constant value.

const(value=1, verbosity=normal)
  • value (int [0, infinity]): the constant value
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output

g-value evaluator#

Returns the g-value (path cost) of the search node.

g(verbosity=normal)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output

Max evaluator#

Calculates the maximum of the sub-evaluators.

max(evals, verbosity=normal)
  • evals (list of Evaluator): at least one evaluator
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output

Preference evaluator#

Returns 0 if preferred is true and 1 otherwise.

pref(verbosity=normal)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output

Sum evaluator#

Calculates the sum of the sub-evaluators.

sum(evals, verbosity=normal)
  • evals (list of Evaluator): at least one evaluator
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output

Weighted evaluator#

Multiplies the value of the evaluator with the given weight.

weight(eval, weight, verbosity=normal)
  • eval (Evaluator): evaluator
  • weight (int): weight
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
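The basic evaluators above are typically combined to build search guidance. The following sketches a weighted-A*-style f-value, under the assumption that the FF heuristic ff() and the eager(single(...)) search plugin are available as described elsewhere in the planner documentation:

```
let(hff, ff(), eager(single(sum([g(), weight(hff, 2)]))))
```

Here sum([g(), weight(hff, 2)]) computes f(s) = g(s) + 2 * h_FF(s) for each evaluated state.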

Cost Partitioning Heuristics#

Canonical heuristic over abstractions#

Compute the canonical heuristic over a set of abstraction heuristics, i.e., the maximum over all maximal additive subsets of the given abstractions.

canonical_heuristic(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)
  • abstractions (list of AbstractionGenerator): abstraction generators
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Greedy zero-one cost partitioning#

gzocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)
  • abstractions (list of AbstractionGenerator): abstraction generators
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • orders (OrderGenerator): order generator
  • max_orders (int [0, infinity]): maximum number of orders
  • max_size (int [0, infinity]): maximum heuristic size in KiB
  • max_time (double [0, infinity]): maximum time in seconds for finding orders
  • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
  • samples (int [1, infinity]): number of samples for diversification
  • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Maximum over abstractions#

Maximize over a set of abstraction heuristics.

maximize(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)
  • abstractions (list of AbstractionGenerator): abstraction generators
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Optimal cost partitioning heuristic#

Compute an optimal cost partitioning for each evaluated state.

ocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, lpsolver=cplex, allow_negative_costs=true)
  • abstractions (list of AbstractionGenerator): abstraction generators
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • allow_negative_costs (bool): use general instead of non-negative cost partitioning

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
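A usage sketch: general (possibly negative) cost partitioning is the default; setting allow_negative_costs=false restricts the LP to non-negative cost functions. The abstraction generator is taken from the default value shown above:

```
astar(ocp([projections(systematic(2))], allow_negative_costs=false))
```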

Post-hoc optimization heuristic#

Compute the maximum over multiple PhO heuristics precomputed offline.

pho(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturated=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, lpsolver=cplex)
  • abstractions (list of AbstractionGenerator): abstraction generators
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • saturated (bool): saturate costs
  • orders (OrderGenerator): order generator
  • max_orders (int [0, infinity]): maximum number of orders
  • max_size (int [0, infinity]): maximum heuristic size in KiB
  • max_time (double [0, infinity]): maximum time in seconds for finding orders
  • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
  • samples (int [1, infinity]): number of samples for diversification
  • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Saturated cost partitioning#

Compute the maximum over multiple saturated cost partitioning heuristics using different orders. For details, see

  • Jendrik Seipp, Thomas Keller and Malte Helmert.
    Saturated Cost Partitioning for Optimal Classical Planning.
    Journal of Artificial Intelligence Research 67:129-167. 2020.

    scp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)

  • abstractions (list of AbstractionGenerator): abstraction generators

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • saturator ({all, perim, perimstar}): function that computes saturated cost functions
    • all: preserve estimates of all states
    • perim: preserve estimates of states in perimeter around goal
    • perimstar: compute 'perim' first and then 'all' with remaining costs
  • orders (OrderGenerator): order generator
  • max_orders (int [0, infinity]): maximum number of orders
  • max_size (int [0, infinity]): maximum heuristic size in KiB
  • max_time (double [0, infinity]): maximum time in seconds for finding orders
  • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
  • samples (int [1, infinity]): number of samples for diversification
  • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Difference from cegar(): The cegar() plugin computes a single saturated cost partitioning over Cartesian abstraction heuristics. In contrast, saturated_cost_partitioning() supports computing the maximum over multiple saturated cost partitionings using different heuristic orders, and it supports both Cartesian abstraction heuristics and pattern database heuristics. While cegar() interleaves abstraction computation with cost partitioning, saturated_cost_partitioning() computes all abstractions using the original costs.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
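A usage sketch, using the abstraction generators that also appear in the default value above; the chosen limits are illustrative, not recommendations:

```
astar(scp([projections(systematic(2)), cartesian()], max_orders=10, max_time=100))
```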

Online saturated cost partitioning#

Compute the maximum over multiple saturated cost partitioning heuristics diversified during the search. For details, see

  • Jendrik Seipp.
    Online Saturated Cost Partitioning for Classical Planning.
    In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 317-321. AAAI Press, 2021.

    scp_online(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_size=infinity, max_time=200, interval=10K, debug=false, random_seed=-1)

  • abstractions (list of AbstractionGenerator): abstraction generators

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • saturator ({all, perim, perimstar}): function that computes saturated cost functions
    • all: preserve estimates of all states
    • perim: preserve estimates of states in perimeter around goal
    • perimstar: compute 'perim' first and then 'all' with remaining costs
  • orders (OrderGenerator): order generator
  • max_size (int [0, infinity]): maximum (estimated) heuristic size in KiB
  • max_time (double [0, infinity]): maximum time in seconds for finding orders
  • interval (int [1, infinity]): select every i-th evaluated state for online diversification
  • debug (bool): print debug output
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: no
  • safe: yes
  • preferred operators: no

(Opportunistic) uniform cost partitioning#

For details, see

  • Jendrik Seipp, Thomas Keller and Malte Helmert.
    A Comparison of Cost Partitioning Algorithms for Optimal Classical Planning.
    In Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 259-268. AAAI Press, 2017.

    ucp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, opportunistic=false, debug=false)

  • abstractions (list of AbstractionGenerator): abstraction generators

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • orders (OrderGenerator): order generator
  • max_orders (int [0, infinity]): maximum number of orders
  • max_size (int [0, infinity]): maximum heuristic size in KiB
  • max_time (double [0, infinity]): maximum time in seconds for finding orders
  • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
  • samples (int [1, infinity]): number of samples for diversification
  • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • opportunistic (bool): recalculate uniform cost partitioning after each considered abstraction
  • debug (bool): print debugging messages

Supported language features:

  • action costs: supported
  • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
  • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Pattern Database Heuristics#

Canonical PDB#

The canonical pattern database heuristic is calculated as follows. For a given pattern collection C, the value of the canonical heuristic function is the maximum over all maximal additive subsets of C, where the value of an additive subset is the sum of the heuristic values of its patterns for the given state.

cpdbs(patterns=systematic(1), max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)
  • patterns (PatternCollectionGenerator): pattern generation method
  • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
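A usage sketch combining the canonical PDB heuristic with systematically generated patterns of size 2 (the pattern generator name follows the default shown above):

```
astar(cpdbs(patterns=systematic(2)))
```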

iPDB#

This approach combines the Canonical PDB heuristic with patterns computed by the hill-climbing algorithm for pattern generation. It is shorthand for the command-line option cpdbs(hillclimbing()). Both the heuristic and the pattern generation algorithm are described in the paper cited under Hill climbing.

For implementation notes, see the Implementation Notes section below.

See also Canonical PDB and Hill climbing for more details.

ipdb(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, max_generated_patterns=infinity, random_seed=-1, max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)
  • pdb_max_size (int [1, infinity]): maximal number of states per pattern database
  • collection_max_size (int [1, infinity]): maximal number of states in the pattern collection
  • num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection
  • min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection
  • max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.
  • max_generated_patterns (int [0, infinity]): maximum number of generated patterns
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

Note: This pattern generation method generates patterns optimized for use with the canonical pattern database heuristic.

Implementation Notes#

The following briefly describes the algorithm and explains the differences between the original implementation from 2007 and the new one in Fast Downward.

The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

The algorithm is basically a local search (hill climbing) that searches the "pattern neighbourhood" (starting with one pattern for each goal variable) for improvements to the pattern collection. This is done as described in the section "pattern construction as search" in the paper, except for the corrected search neighbourhood discussed below. For evaluating the neighbourhood, the "counting approximation" introduced in the paper was implemented. An important difference, however, is that this implementation computes the full pattern database for each candidate pattern, rather than using A* search to compute the heuristic values only for the sample states of each pattern.

The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with mean equal to the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but its length is based on an estimate of the number of solution steps, calculated by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to account for non-unit-cost problems. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

The section "avoiding redundant evaluations" describes how the search neighbourhood of patterns can be restricted, via causal graph analysis, to variables that are relevant to the variables already included in the pattern. The paper contains a mistake that causes some relevant neighbouring patterns to be ignored; see the errata for details. This mistake has been corrected in this implementation. The second approach described in the paper (statistical confidence intervals) is not applicable here, since this implementation does not use A* search but constructs the entire pattern database for every candidate pattern. The search ends when there is no further improvement (or the improvement is smaller than the minimal improvement, which can be set as an option); there is, however, no limit on the number of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
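A usage sketch: since ipdb() is shorthand for cpdbs(hillclimbing()), it is typically used directly inside A*. The time limit below is illustrative, not a recommendation:

```
astar(ipdb(max_time=900))
```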

Pattern database heuristic#

TODO

pdb(pattern=greedy(), verbosity=normal, transform=no_transform(), cache_estimates=true)
  • pattern (PatternGenerator): pattern generation method
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Zero-One PDB#

The zero/one pattern database heuristic is simply the sum of the heuristic values of all patterns in the pattern collection. In contrast to the canonical pattern database heuristic, there is no need to check for additive subsets, because the additivity of the patterns is guaranteed by action cost partitioning. This heuristic uses the simplest form of action cost partitioning: if an operator affects more than one pattern in the collection, its cost is counted in full for one pattern (the first one it affects) and set to zero for all other affected patterns.

zopdbs(patterns=systematic(1), verbosity=normal, transform=no_transform(), cache_estimates=true)
  • patterns (PatternCollectionGenerator): pattern generation method
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no

Potential Heuristics#

Potential heuristic optimized for all states#

The algorithm is based on

  • Jendrik Seipp, Florian Pommerening and Malte Helmert.
    New Optimization Functions for Potential Heuristics.
    In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

    all_states_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
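A usage sketch with the default potential bound made explicit; the value 1e8 simply restates the default from the option listing above:

```
astar(all_states_potential(max_potential=1e8))
```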

Diverse potential heuristics#

The algorithm is based on

  • Jendrik Seipp, Florian Pommerening and Malte Helmert.
    New Optimization Functions for Potential Heuristics.
    In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

    diverse_potentials(num_samples=1000, max_num_heuristics=infinity, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

  • num_samples (int [0, infinity]): Number of states to sample

  • max_num_heuristics (int [0, infinity]): maximum number of potential heuristics
  • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
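Because diverse_potentials is admissible and consistent, it is typically paired with an optimal search engine such as A*. The following invocation is an illustrative sketch only: it assumes the standard fast-downward.py driver script and PDDL input files named domain.pddl and problem.pddl, neither of which is described on this page.

    ./fast-downward.py domain.pddl problem.pddl \
        --search "astar(diverse_potentials(num_samples=1000, max_num_heuristics=infinity))"

All parameters shown take their documented default values here; they can be omitted for the same effect.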

Potential heuristic optimized for initial state#

The algorithm is based on

  • Jendrik Seipp, Florian Pommerening and Malte Helmert.
    New Optimization Functions for Potential Heuristics.
    In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

    initial_state_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bound. In some domains this makes the computation of weights unbounded, in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
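Since initial_state_potential optimizes a single potential function for the initial state only, it is comparatively cheap to construct. A hedged example invocation, again assuming the standard fast-downward.py driver and example PDDL files not shown on this page:

    ./fast-downward.py domain.pddl problem.pddl \
        --search "astar(initial_state_potential(max_potential=1e8, lpsolver=soplex))"

Here lpsolver=soplex selects the open source solver; the build must include LP support as noted above.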

Sample-based potential heuristics#

Maximum over multiple potential heuristics optimized for samples. The algorithm is based on

  • Jendrik Seipp, Florian Pommerening and Malte Helmert.
    New Optimization Functions for Potential Heuristics.
    In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

    sample_based_potentials(num_heuristics=1, num_samples=1000, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

  • num_heuristics (int [0, infinity]): number of potential heuristics

  • num_samples (int [0, infinity]): Number of states to sample
  • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bound. In some domains this makes the computation of weights unbounded, in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
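As with the other potential heuristics above, sample_based_potentials is admissible and consistent, so it can drive an optimal A* search. A sketch invocation, assuming the standard fast-downward.py driver and example PDDL files not shown on this page; the random seed is fixed here only to make runs reproducible:

    ./fast-downward.py domain.pddl problem.pddl \
        --search "astar(sample_based_potentials(num_heuristics=5, num_samples=1000, random_seed=0))"

Using num_heuristics greater than 1 takes the maximum over several independently optimized potential functions, which preserves admissibility.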