generator : Generator
Some Generator used to prepare partitions for cross-validation.
It must not change 'targets', so e.g. an AttributePermutator must
not be used.
errorfx : func, optional
Functor that computes a scalar error value from the vectors of
desired and predicted values (e.g. subclass of ErrorFunction).
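A minimal sketch of such a functor (illustrative only; the
mismatch-rate semantics below are an assumption, not a confirmed
implementation)::

    import numpy as np

    def mean_mismatch_error(predicted, targets):
        # Scalar error: fraction of predictions that disagree with
        # the desired target values.
        return np.mean(np.asarray(predicted) != np.asarray(targets))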
indexsum : ('sparse', 'fancy'), optional
What to use to compute sums over arbitrary columns. 'fancy'
corresponds to regular fancy indexing over columns, whereas
'sparse' uses a product of sparse matrices (usually faster, and
therefore the default whenever scipy is available).
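An illustrative NumPy/SciPy sketch of the two strategies (not the
library's internal code); both compute per-ROI sums over feature
columns::

    import numpy as np
    import scipy.sparse as sp

    data = np.random.rand(50, 200)                  # samples x features
    rois = [np.array([0, 1, 2]), np.array([2, 5])]  # feature ids per ROI

    # 'fancy': regular fancy indexing, one column subset per ROI
    fancy = np.column_stack([data[:, ids].sum(axis=1) for ids in rois])

    # 'sparse': a single product with a 0/1 indicator matrix
    rows = np.concatenate(rois)
    cols = np.concatenate([[j] * len(ids) for j, ids in enumerate(rois)])
    ind = sp.csr_matrix((np.ones(len(rows)), (rows, cols)),
                        shape=(data.shape[1], len(rois)))
    sparse = ind.T.dot(data.T).T    # usually faster for many ROIs

    assert np.allclose(fancy, sparse)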
reuse_neighbors : bool, optional
Compute neighborhood information only once, allowing for efficient
reuse on subsequent calls where the dataset's feature attributes
remain the same (e.g. during permutation testing).
splitter : Splitter, optional
Splitter to use to split the partitioned datasets. If None is
specified, a standard splitter operating on the partitions will be
used.
enable_ca : None or list of str
Names of the conditional attributes which should be enabled in addition
to the default ones.
disable_ca : None or list of str
Names of the conditional attributes which should be disabled.
queryengine : QueryEngine
Engine to use to discover the "neighborhood" of each feature.
See QueryEngine.
roi_ids : None or list(int) or str
List of feature ids (not coordinates) that shall serve as ROI seeds
(e.g. sphere centers). Alternatively, this can be the name of a
feature attribute of the input dataset, whose non-zero values
determine the feature ids. By default all features will be used.
null_dist : instance of distribution estimator
The estimated distribution is used to assign a probability for a
certain value of the computed measure.
auto_train : bool
Flag whether the learner will automatically train itself on the input
dataset when called untrained.
force_train : bool
Flag whether the learner will enforce training on the input dataset
upon every call.
space : str, optional
Name of the 'processing space'. The actual meaning of this argument
heavily depends on the sub-class implementation. In general, this is
a trigger that tells the node to compute and store information about
the input data that is "interesting" in the context of the
corresponding processing in the output dataset.
pass_attr : str, list of str|tuple, optional
Additional attributes to pass on to an output dataset. Attributes can
be taken from all three attribute collections of an input dataset
(sa, fa, a – see Dataset.get_attr()), or from the collection
of conditional attributes (ca) of a node instance. Corresponding
collection name prefixes should be used to identify attributes, e.g.
'ca.null_prob' for the conditional attribute 'null_prob', or
'fa.stats' for the feature attribute 'stats'. In addition to a plain
attribute identifier it is possible to use a tuple to trigger more
complex operations. The first tuple element is the attribute
identifier, as described before. The second element is the name of the
target attribute collection (sa, fa, or a). The third element is the
axis number of a multidimensional array that shall be swapped with the
current first axis. The fourth element is a new name that shall be
used for an attribute in the output dataset.
Example: ('ca.null_prob', 'fa', 1, 'pvalues') will take the
conditional attribute 'null_prob' and store it as a feature attribute
'pvalues', while swapping the first and second axes. Simplified
instructions can be given by leaving out consecutive tuple elements
starting from the end.
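The accepted value forms as a short sketch (attribute names taken
from the example above)::

    pass_attr = [
        'fa.stats',                # plain identifier: pass through as-is
        ('ca.null_prob', 'fa'),    # simplified tuple: relocate into .fa
        ('ca.null_prob', 'fa', 1, 'pvalues'),
                                   # full form: store conditional attribute
                                   # 'null_prob' as feature attribute
                                   # 'pvalues', swapping axes 0 and 1
    ]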
postproc : Node instance, optional
Node to perform post-processing of results. This node is applied
in __call__() to perform a final processing step on the result
dataset before it is returned. If None, nothing is done.
descr : str
Description of the instance.
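Taken together, a hedged construction sketch. This parameter list
resembles PyMVPA's GNB-based searchlight, so the imports and the
sphere_gnbsearchlight helper below are assumptions rather than a
confirmed API::

    # Assumed PyMVPA-style imports; treat all paths as illustrative.
    from mvpa2.clfs.gnb import GNB
    from mvpa2.generators.partition import NFoldPartitioner
    from mvpa2.mappers.fx import mean_sample
    from mvpa2.measures.gnbsearchlight import sphere_gnbsearchlight

    sl = sphere_gnbsearchlight(
        GNB(),
        NFoldPartitioner(),      # generator: leaves 'targets' untouched
        radius=3,                # defines the neighborhood queryengine
        indexsum='sparse',       # sparse-matrix sums if scipy is available
        reuse_neighbors=True,    # cache neighborhoods, e.g. for permutations
        postproc=mean_sample(),  # average per-fold results into one sample
        descr='GNB searchlight, radius 3')
    # result = sl(dataset)      # dataset: a Dataset with sa/fa/a collections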