<?xml version="1.0"?>
<doc> <assembly> <name>Microsoft.ML</name> </assembly> <members> <member name="T:Microsoft.ML.Runtime.Experiment"> <summary> This class represents an entry point graph. The nodes in the graph represent entry point calls and the edges of the graph are variables that help connect the nodes. </summary> </member> <member name="M:Microsoft.ML.Runtime.Experiment.Compile"> <summary> Parses the nodes to determine the validity of the graph and to determine the inputs and outputs of the graph. </summary> </member> <member name="T:Microsoft.ML.Runtime.FixedPlattCalibratorCalibratorTrainer"> <summary> Platt calibration with fixed slope and offset parameters. </summary> </member> <member name="P:Microsoft.ML.Runtime.FixedPlattCalibratorCalibratorTrainer.Slope"> <summary> The slope parameter of f(x) = 1 / (1 + exp(-slope * x + offset)) </summary> </member> <member name="P:Microsoft.ML.Runtime.FixedPlattCalibratorCalibratorTrainer.Offset"> <summary> The offset parameter of f(x) = 1 / (1 + exp(-slope * x + offset)) </summary> </member> <member name="T:Microsoft.ML.Runtime.NaiveCalibratorCalibratorTrainer"> <summary> Naive calibration. </summary> </member> <member name="T:Microsoft.ML.Runtime.PavCalibratorCalibratorTrainer"> <summary> Pool adjacent violators (PAV) calibration. </summary> </member> <member name="T:Microsoft.ML.Runtime.PlattCalibratorCalibratorTrainer"> <summary> Platt calibration. </summary> </member> <member name="T:Microsoft.ML.Runtime.ExpLossClassificationLossFunction"> <summary> Exponential loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.ExpLossClassificationLossFunction.Beta"> <summary> Beta (dilation) </summary> </member> <member name="T:Microsoft.ML.Runtime.HingeLossClassificationLossFunction"> <summary> Hinge loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.HingeLossClassificationLossFunction.Margin"> <summary> Margin value </summary> </member> <member name="T:Microsoft.ML.Runtime.LogLossClassificationLossFunction"> <summary> Log loss. 
</summary> </member> <member name="T:Microsoft.ML.Runtime.SmoothedHingeLossClassificationLossFunction"> <summary> Smoothed hinge loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.SmoothedHingeLossClassificationLossFunction.SmoothingConst"> <summary> Smoothing constant </summary> </member> <member name="T:Microsoft.ML.Runtime.GLEarlyStoppingCriterion"> <summary> Stops in case of loss of generality. </summary> </member> <member name="P:Microsoft.ML.Runtime.GLEarlyStoppingCriterion.Threshold"> <summary> Threshold in range [0,1]. </summary> </member> <member name="T:Microsoft.ML.Runtime.LPEarlyStoppingCriterion"> <summary> Stops in case of low progress. </summary> </member> <member name="P:Microsoft.ML.Runtime.LPEarlyStoppingCriterion.Threshold"> <summary> Threshold in range [0,1]. </summary> </member> <member name="P:Microsoft.ML.Runtime.LPEarlyStoppingCriterion.WindowSize"> <summary> The window size. </summary> </member> <member name="T:Microsoft.ML.Runtime.PQEarlyStoppingCriterion"> <summary> Stops when the generality-to-progress ratio exceeds the threshold. </summary> </member> <member name="P:Microsoft.ML.Runtime.PQEarlyStoppingCriterion.Threshold"> <summary> Threshold in range [0,1]. </summary> </member> <member name="P:Microsoft.ML.Runtime.PQEarlyStoppingCriterion.WindowSize"> <summary> The window size. </summary> </member> <member name="T:Microsoft.ML.Runtime.TREarlyStoppingCriterion"> <summary> Stops if the validation score exceeds the threshold value. </summary> </member> <member name="P:Microsoft.ML.Runtime.TREarlyStoppingCriterion.Threshold"> <summary> Tolerance threshold. (Non-negative value) </summary> </member> <member name="T:Microsoft.ML.Runtime.UPEarlyStoppingCriterion"> <summary> Stops in case of consecutive loss in generality. </summary> </member> <member name="P:Microsoft.ML.Runtime.UPEarlyStoppingCriterion.WindowSize"> <summary> The window size. 
</summary> </member> <member name="T:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer"> <summary> Uses a logit-boost boosted tree learner to perform binary classification. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.UnbalancedSets"> <summary> Should we use derivatives optimized for unbalanced sets </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.ParallelTrainer"> <summary> Allows choosing the parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> 
<member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeBinaryClassificationFastTreeTrainer.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="T:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer"> <summary> Trains gradient boosted decision trees using the LambdaRank quasi-gradient. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.CustomGains"> <summary> Comma-separated list of gains associated with each relevance label. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.TrainDcg"> <summary> Train DCG instead of NDCG </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.SortingAlgorithm"> <summary> The sorting algorithm to use for DCG and LambdaMART calculations [DescendingStablePessimistic/DescendingStable/DescendingReverse/DescendingDotNet] </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.LambdaMartMaxTruncation"> <summary> Max-NDCG truncation to use in the LambdaMART algorithm </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.ShiftedNdcg"> <summary> Use shifted NDCG </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.CostFunctionParam"> <summary> Cost function parameter (w/c) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.DistanceWeight2"> <summary> Distance weight 2 adjustment to cost </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NormalizeQueryLambdas"> <summary> Normalize query lambdas </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.ParallelTrainer"> <summary> Allows choosing the parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.CategoricalSplit"> <summary> Whether to split based on 
multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRankingFastTreeTrainer.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="T:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer"> <summary> Trains gradient boosted decision trees to fit target values using least-squares. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline 
ranker </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.ParallelTrainer"> <summary> Allows choosing the parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> 
</member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.GroupIdColumn"> <summary> Column to use for example groupId 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeRegressionFastTreeTrainer.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="T:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer"> <summary> Trains gradient boosted decision trees to fit target values using a Tweedie loss function. This learner is a generalization of Poisson, compound Poisson, and gamma regression. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Index"> <summary> Index parameter for the Tweedie distribution, in the range [1, 2]. 1 is Poisson loss, 2 is gamma loss, and intermediate values are compound Poisson loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.ParallelTrainer"> <summary> Allows choosing the parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member 
name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureFraction"> <summary> The 
fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds 
</summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Runtime.FastTreeTweedieRegressionFastTreeTrainer.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="T:Microsoft.ML.Runtime.NGramNgramExtractor"> <summary> Extracts NGrams from text and converts them to vectors using a dictionary. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.NGramNgramExtractor.NgramLength"> <summary> Ngram length </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramNgramExtractor.SkipLength"> <summary> Maximum number of tokens to skip when constructing an ngram </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramNgramExtractor.AllLengths"> <summary> Whether to include all ngram lengths up to NgramLength or only NgramLength </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramNgramExtractor.MaxNumTerms"> <summary> Maximum number of ngrams to store in the dictionary </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramNgramExtractor.Weighting"> <summary> The weighting criteria </summary> </member> <member name="T:Microsoft.ML.Runtime.NGramHashNgramExtractor"> <summary> Extracts NGrams from text and converts them to vectors using the hashing trick. </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.NgramLength"> <summary> Ngram length </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.SkipLength"> <summary> Maximum number of tokens to skip when constructing an ngram </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.HashBits"> <summary> Number of bits to hash into. Must be between 1 and 30, inclusive. </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.Seed"> <summary> Hashing seed </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.Ordered"> <summary> Whether the position of each source column should be included in the hash (when there are multiple source columns). </summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.InvertHash"> <summary> Limit the number of keys used to generate the slot name to this many. 0 means no invert hashing, -1 means no limit. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.NGramHashNgramExtractor.AllLengths"> <summary> Whether to include all ngram lengths up to ngramLength or only ngramLength </summary> </member> <member name="T:Microsoft.ML.Runtime.SingleParallelTraining"> <summary> Single node machine learning process. </summary> </member> <member name="T:Microsoft.ML.Runtime.PoissonLossRegressionLossFunction"> <summary> Poisson loss. </summary> </member> <member name="T:Microsoft.ML.Runtime.SquaredLossRegressionLossFunction"> <summary> Squared loss. </summary> </member> <member name="T:Microsoft.ML.Runtime.TweedieLossRegressionLossFunction"> <summary> Tweedie loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.TweedieLossRegressionLossFunction.Index"> <summary> Index parameter for the Tweedie distribution, in the range [1, 2]. 1 is Poisson loss, 2 is gamma loss, and intermediate values are compound Poisson loss. </summary> </member> <member name="T:Microsoft.ML.Runtime.HingeLossSDCAClassificationLossFunction"> <summary> Hinge loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.HingeLossSDCAClassificationLossFunction.Margin"> <summary> Margin value </summary> </member> <member name="T:Microsoft.ML.Runtime.LogLossSDCAClassificationLossFunction"> <summary> Log loss. </summary> </member> <member name="T:Microsoft.ML.Runtime.SmoothedHingeLossSDCAClassificationLossFunction"> <summary> Smoothed Hinge loss. </summary> </member> <member name="P:Microsoft.ML.Runtime.SmoothedHingeLossSDCAClassificationLossFunction.SmoothingConst"> <summary> Smoothing constant </summary> </member> <member name="T:Microsoft.ML.Runtime.SquaredLossSDCARegressionLossFunction"> <summary> Squared loss. </summary> </member> <member name="T:Microsoft.ML.Runtime.CustomStopWordsRemover"> <summary> Remover with list of stopwords specified by the user. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.CustomStopWordsRemover.Stopword"> <summary> List of stopwords </summary> </member> <member name="T:Microsoft.ML.Runtime.PredefinedStopWordsRemover"> <summary> Remover with predefined list of stop words. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.CodeGen.GeneratorBase.Generate(Microsoft.ML.Runtime.Internal.Utilities.IndentingTextWriter,System.String,System.String,Microsoft.ML.Runtime.ComponentCatalog.LoadableClassInfo,System.Boolean,System.String,System.String,System.String,System.String,System.String,System.String,System.String,System.String,System.Collections.Generic.HashSet{System.String},System.Collections.Generic.HashSet{System.String})"> <summary> Generate the module and its implementation. </summary> <param name="writer">The writer.</param> <param name="prefix">The module prefix.</param> <param name="regenerate">The command string used to generate.</param> <param name="component">The component.</param> <param name="generateEnums">Whether to generate enums for SubComponents.</param> <param name="moduleId"></param> <param name="moduleName"></param> <param name="moduleOwner"></param> <param name="moduleVersion"></param> <param name="moduleState"></param> <param name="moduleType"></param> <param name="moduleDeterminism"></param> <param name="moduleCategory"></param> <param name="exclude">The set of parameters to exclude</param> <param name="namespaces">The set of extra namespaces</param> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.CodeGen.GeneratorBase.GenerateEnums(Microsoft.ML.Runtime.Internal.Utilities.IndentingTextWriter,Microsoft.ML.Runtime.CommandLine.CmdParser.ArgInfo.Arg,System.Collections.Generic.HashSet{System.Tuple{System.Type,System.Type}})"> <summary> Generate enums for subcomponents. Uses ReflectionUtils to filter only the subcomponents that match the base type and the signature. 
</summary> <param name="w"></param> <param name="arg"></param> <param name="seen"></param> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.CodeGen.ImplGeneratorBase.GenerateFieldsOrProperties(Microsoft.ML.Runtime.Internal.Utilities.IndentingTextWriter,Microsoft.ML.Runtime.CommandLine.CmdParser.ArgInfo.Arg,System.String,System.Action{Microsoft.ML.Runtime.Internal.Utilities.IndentingTextWriter,System.String,System.String,System.String,System.Boolean,System.String})"> <summary> Generates private fields and public properties for all the fields in the arguments. Recursively generates fields and properties for subcomponents. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.CrossValidationBinaryMacro"> <summary> This macro entry point implements cross validation for binary classification. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.CrossValidationMacro"> <summary> This macro entry point implements cross validation. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.CVSplit"> <summary> The module that splits the input dataset into the specified number of cross-validation folds, and outputs the 'training' and 'testing' portion of the input for each fold. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.FeatureCombiner.PrepareFeatures(Microsoft.ML.Runtime.IHostEnvironment,Microsoft.ML.Runtime.EntryPoints.FeatureCombiner.FeatureCombinerInput)"> <summary> Given a list of feature columns, creates one "Features" column. It converts all the numeric columns to R4. For Key columns, it uses a KeyToValue+Term+KeyToVector transform chain to create one-hot vectors. The last transform is to concatenate all the resulting columns into one "Features" column. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.ImportTextData"> <summary> A component for importing text files as <see cref="T:Microsoft.ML.Runtime.Data.IDataView"/>. 
</summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.ExecuteGraphCommand.SavePredictorModel(Microsoft.ML.Runtime.EntryPoints.IPredictorModel,System.String)"> <summary> Saves the PredictorModel to the given path </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.ExecuteGraphCommand.SaveDataView(Microsoft.ML.Runtime.Data.IDataView,System.String,System.String)"> <summary> Saves the IDV to file based on its extension </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner"> <summary> This class runs a graph of entry points with the specified inputs, and produces the specified outputs. The entry point graph is provided as a <see cref="T:Newtonsoft.Json.Linq.JArray"/> of graph nodes. The inputs need to be provided separately: the graph runner will only compile a list of required inputs, and the calling code is expected to set them prior to running the graph. REVIEW: currently, the graph is executed synchronously, one node at a time. This is an implementation choice, we probably need to consider parallel asynchronous execution, once we agree on an acceptable syntax for it. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner.RunAll"> <summary> Run all nodes in the graph. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner.GetOutput``1(System.String)"> <summary> Retrieve an output of the experiment graph. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner.GetOutputOrDefault``1(System.String)"> <summary> Get the value of an EntryPointVariable present in the graph, or returns null. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner.SetInput``1(System.String,``0)"> <summary> Set the input of the experiment graph. 
</summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.GraphRunner.GetPortDataKind(System.String)"> <summary> Get the data kind of a particular port. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.JsonUtils.JsonManifestUtils"> <summary> Utilities to generate JSON manifests for entry points and other components. </summary> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.JsonManifestUtils.BuildAllManifests(Microsoft.ML.Runtime.IExceptionContext,Microsoft.ML.Runtime.EntryPoints.ModuleCatalog)"> <summary> Builds a JSON representation of all entry points and components of the <paramref name="catalog"/>. </summary> <param name="ectx">The exception context to use</param> <param name="catalog">The module catalog</param> </member> <member name="M:Microsoft.ML.Runtime.EntryPoints.JsonUtils.JsonManifestUtils.BuildComponentToken(Microsoft.ML.Runtime.IExceptionContext,Microsoft.ML.Runtime.EntryPoints.IComponentFactory,Microsoft.ML.Runtime.EntryPoints.ModuleCatalog)"> <summary> Build a token for component default value. This will look up the component in the catalog, and if it finds an entry, it will build a JSON structure that would be parsed into the default value. This is an inherently fragile setup in case when the factory is not trivial, but it will work well for 'property bag' factories that we are currently using. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.MacroUtils.TrainerKinds"> <summary> Lists the types of trainer signatures. Used by entry points and autoML system to know what types of evaluators to use for the train test / pipeline sweeper. </summary> </member> <member name="T:Microsoft.ML.Runtime.EntryPoints.OneVersusAllMacro"> <summary> This macro entrypoint implements OVA. 
</summary> </member> <member name="P:Microsoft.ML.Runtime.EntryPointTransformOutput.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Runtime.EntryPointTransformOutput.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Runtime.EntryPointTrainerOutput.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Data.IDataViewArrayConverter"> <summary> Create an array variable </summary> </member> <member name="P:Microsoft.ML.Data.IDataViewArrayConverter.Data"> <summary> The data sets </summary> </member> <member name="P:Microsoft.ML.Data.IDataViewArrayConverter.Output.OutputData"> <summary> The data set array </summary> </member> <member name="T:Microsoft.ML.Data.PredictorModelArrayConverter"> <summary> Create an array variable </summary> </member> <member name="P:Microsoft.ML.Data.PredictorModelArrayConverter.Model"> <summary> The models </summary> </member> <member name="P:Microsoft.ML.Data.PredictorModelArrayConverter.Output.OutputModel"> <summary> The model array </summary> </member> <member name="T:Microsoft.ML.Data.TextLoader"> <summary> Import a dataset from a text file </summary> </member> <member name="P:Microsoft.ML.Data.TextLoader.InputFile"> <summary> Location of the input file </summary> </member> <member name="P:Microsoft.ML.Data.TextLoader.CustomSchema"> <summary> Custom schema to use for parsing </summary> </member> <member name="P:Microsoft.ML.Data.TextLoader.Output.Data"> <summary> The resulting data view </summary> </member> <member name="T:Microsoft.ML.Models.AnomalyDetectionEvaluator"> <summary> Evaluates an anomaly detection scored dataset. 
</summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.K"> <summary> Expected number of false positives </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.P"> <summary> Expected false positive rate </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.NumTopResults"> <summary> Number of top-scored predictions to display </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.Stream"> <summary> Whether to calculate metrics in one pass </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.MaxAucExamples"> <summary> The number of samples to use for AUC calculation. If 0, AUC is not computed. If -1, the whole dataset is used </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.NameColumn"> <summary> Name column name. 
</summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.AnomalyDetectionEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="T:Microsoft.ML.Models.BinaryClassificationEvaluator"> <summary> Evaluates a binary classification scored dataset. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.ProbabilityColumn"> <summary> Probability column name </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Threshold"> <summary> Probability value for classification thresholding </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.UseRawScoreThreshold"> <summary> Use raw score value instead of probability for classification thresholding </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.NumRocExamples"> <summary> The number of samples to use for p/r curve generation. Specify 0 for no p/r curve generation </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.MaxAucExamples"> <summary> The number of samples to use for AUC calculation. If 0, AUC is not computed. If -1, the whole dataset is used </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.NumAuPrcExamples"> <summary> The number of samples to use for AUPRC calculation. Specify 0 for no AUPRC calculation </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.WeightColumn"> <summary> Weight column name. 
</summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="M:Microsoft.ML.Models.BinaryClassificationEvaluator.Evaluate(Microsoft.ML.PredictionModel,Microsoft.ML.ILearningPipelineLoader)"> <summary> Computes the quality metrics for the PredictionModel using the specified data set. </summary> <param name="model"> The trained PredictionModel to be evaluated. </param> <param name="testData"> The test data that will be predicted and used to evaluate the model. </param> <returns> A BinaryClassificationMetrics instance that describes how well the model performed against the test data. 
</returns> </member> <member name="P:Microsoft.ML.Models.CrossValidationBinaryMacroSubGraphInput.Data"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidationBinaryMacroSubGraphOutput.Model"> <summary> The model </summary> </member> <member name="T:Microsoft.ML.Models.BinaryCrossValidator"> <summary> Cross validation for binary classification </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Data"> <summary> The data set </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Nodes"> <summary> The training subgraph </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Inputs"> <summary> The training subgraph inputs </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Outputs"> <summary> The training subgraph outputs </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.StratificationColumn"> <summary> Column to use for stratification </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.NumFolds"> <summary> Number of folds in k-fold cross-validation </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.BinaryCrossValidator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="T:Microsoft.ML.Models.ClassificationEvaluator"> <summary> Evaluates a multi-class classification scored dataset. 
</summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.OutputTopKAcc"> <summary> Output top-K accuracy. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.NumTopClassesToOutput"> <summary> Output top-K classes. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.NumClassesConfusionMatrix"> <summary> Maximum number of classes in confusion matrix. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.OutputPerClassStatistics"> <summary> Output per class statistics and confusion matrix. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.NameColumn"> <summary> Name column name. 
</summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="M:Microsoft.ML.Models.ClassificationEvaluator.Evaluate(Microsoft.ML.PredictionModel,Microsoft.ML.ILearningPipelineLoader)"> <summary> Computes the quality metrics for the multi-class classification PredictionModel using the specified data set. </summary> <param name="model"> The trained multi-class classification PredictionModel to be evaluated. </param> <param name="testData"> The test data that will be predicted and used to evaluate the model. </param> <returns> A ClassificationMetrics instance that describes how well the model performed against the test data. </returns> </member> <member name="T:Microsoft.ML.Models.ClusterEvaluator"> <summary> Evaluates a clustering scored dataset. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.FeatureColumn"> <summary> Features column name </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.CalculateDbi"> <summary> Whether to calculate the Davies-Bouldin Index (DBI), a time-consuming unsupervised metric </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.NumTopClustersToOutput"> <summary> Output top K clusters </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.WeightColumn"> <summary> Weight column name. 
</summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.ClusterEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidationMacroSubGraphInput.Data"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidationMacroSubGraphOutput.Model"> <summary> The model </summary> </member> <member name="T:Microsoft.ML.Models.CrossValidator"> <summary> Cross validation for general learning </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Data"> <summary> The data set </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.TransformModel"> <summary> The transform model from the pipeline before this command. It gets included in the Output.PredictorModel. 
</summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Nodes"> <summary> The training subgraph </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Inputs"> <summary> The training subgraph inputs </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Outputs"> <summary> The training subgraph outputs </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.StratificationColumn"> <summary> Column to use for stratification </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.NumFolds"> <summary> Number of folds in k-fold cross-validation </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Kind"> <summary> Specifies the trainer kind, which determines the evaluator to be used. </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Output.PredictorModel"> <summary> The final model including the trained predictor model and the model from the transforms, provided as the Input.TransformModel. 
</summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="T:Microsoft.ML.Models.CrossValidatorDatasetSplitter"> <summary> Split the dataset into the specified number of cross-validation folds (train and test sets) </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidatorDatasetSplitter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidatorDatasetSplitter.NumFolds"> <summary> Number of folds to split into </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidatorDatasetSplitter.StratificationColumn"> <summary> Stratification column </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidatorDatasetSplitter.Output.TrainData"> <summary> Training data (one dataset per fold) </summary> </member> <member name="P:Microsoft.ML.Models.CrossValidatorDatasetSplitter.Output.TestData"> <summary> Testing data (one dataset per fold) </summary> </member> <member name="T:Microsoft.ML.Models.DatasetTransformer"> <summary> Applies a TransformModel to a dataset. 
</summary> </member> <member name="P:Microsoft.ML.Models.DatasetTransformer.TransformModel"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Models.DatasetTransformer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.DatasetTransformer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="T:Microsoft.ML.Models.FixedPlattCalibrator"> <summary> Apply a Platt calibrator with a fixed slope and offset to an input model </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.Slope"> <summary> The slope parameter of the calibration function 1 / (1 + exp(-slope * x + offset)) </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.Offset"> <summary> The offset parameter of the calibration function 1 / (1 + exp(-slope * x + offset)) </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.UncalibratedPredictorModel"> <summary> The predictor to calibrate </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.MaxRows"> <summary> The maximum number of examples to train the calibrator on </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.FixedPlattCalibrator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Models.MultiOutputRegressionEvaluator"> <summary> Evaluates a multi-output regression scored dataset. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.LossFunction"> <summary> Loss function </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.SupressScoresAndLabels"> <summary> Suppress labels and scores in per-instance outputs? 
</summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.MultiOutputRegressionEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="T:Microsoft.ML.Models.NaiveCalibrator"> <summary> Apply a Naive calibrator to an input model </summary> </member> <member name="P:Microsoft.ML.Models.NaiveCalibrator.UncalibratedPredictorModel"> <summary> The predictor to calibrate </summary> </member> <member name="P:Microsoft.ML.Models.NaiveCalibrator.MaxRows"> <summary> The maximum number of examples to train the calibrator on </summary> </member> <member name="P:Microsoft.ML.Models.NaiveCalibrator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.NaiveCalibrator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAllMacroSubGraphOutput.Model"> <summary> 
The predictor model for the subgraph exemplar. </summary> </member> <member name="T:Microsoft.ML.Models.OneVersusAll"> <summary> One-vs-All macro (OVA) </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.Nodes"> <summary> The subgraph for the binary trainer used to construct the OVA learner. This should be a TrainBinary node. </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.OutputForSubGraph"> <summary> The training subgraph output. </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.UseProbabilities"> <summary> Use probabilities in OVA combiner </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Models.OneVersusAll.Output.PredictorModel"> <summary> The trained multiclass model </summary> </member> <member name="T:Microsoft.ML.Models.OvaModelCombiner"> <summary> Combines a sequence of PredictorModels into a single model </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.ModelArray"> <summary> Input models </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.UseProbabilities"> <summary> Use probabilities from learners instead of raw values. 
</summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Models.OvaModelCombiner.Output.PredictorModel"> <summary> Predictor model </summary> </member> <member name="T:Microsoft.ML.Models.PAVCalibrator"> <summary> Apply a PAV calibrator to an input model </summary> </member> <member name="P:Microsoft.ML.Models.PAVCalibrator.UncalibratedPredictorModel"> <summary> The predictor to calibrate </summary> </member> <member name="P:Microsoft.ML.Models.PAVCalibrator.MaxRows"> <summary> The maximum number of examples to train the calibrator on </summary> </member> <member name="P:Microsoft.ML.Models.PAVCalibrator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.PAVCalibrator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Models.PlattCalibrator"> <summary> Apply a Platt calibrator to an input model </summary> </member> <member name="P:Microsoft.ML.Models.PlattCalibrator.UncalibratedPredictorModel"> <summary> The predictor to calibrate </summary> </member> <member name="P:Microsoft.ML.Models.PlattCalibrator.MaxRows"> <summary> The maximum number of examples to train the calibrator on </summary> </member> 
<member name="P:Microsoft.ML.Models.PlattCalibrator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Models.PlattCalibrator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Models.QuantileRegressionEvaluator"> <summary> Evaluates a quantile regression scored dataset. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.LossFunction"> <summary> Loss function </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.Index"> <summary> Quantile index to select </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.QuantileRegressionEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="T:Microsoft.ML.Models.RankerEvaluator"> <summary> Evaluates a ranking scored dataset. 
</summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.GroupIdColumn"> <summary> Column to use for the group ID </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.DcgTruncationLevel"> <summary> Maximum truncation level for computing (N)DCG </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.LabelGains"> <summary> Label relevance gains </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.LabelColumn"> <summary> Column to use for labels. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.RankerEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="T:Microsoft.ML.Models.RegressionEvaluator"> <summary> Evaluates a regression scored dataset. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.LossFunction"> <summary> Loss function </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.LabelColumn"> <summary> Column to use for labels. 
</summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.WeightColumn"> <summary> Weight column name. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.ScoreColumn"> <summary> Score column name. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.StratColumn"> <summary> Stratification column name. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.Data"> <summary> The data to be used for evaluation. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.NameColumn"> <summary> Name column name. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.RegressionEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="M:Microsoft.ML.Models.RegressionEvaluator.Evaluate(Microsoft.ML.PredictionModel,Microsoft.ML.ILearningPipelineLoader)"> <summary> Computes the quality metrics for the PredictionModel using the specified data set. </summary> <param name="model"> The trained PredictionModel to be evaluated. </param> <param name="testData"> The test data that will be predicted and used to evaluate the model. </param> <returns> A RegressionMetrics instance that describes how well the model performed against the test data. </returns> </member> <member name="T:Microsoft.ML.Models.Summarizer"> <summary> Summarize a linear regression predictor. 
</summary> </member> <member name="P:Microsoft.ML.Models.Summarizer.PredictorModel"> <summary> The predictor to summarize </summary> </member> <member name="P:Microsoft.ML.Models.Summarizer.Output.Summary"> <summary> The summary of a predictor </summary> </member> <member name="P:Microsoft.ML.Models.Summarizer.Output.Stats"> <summary> The training set statistics. Note that this output can be null. </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryMacroSubGraphInput.Data"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryMacroSubGraphOutput.Model"> <summary> The model </summary> </member> <member name="T:Microsoft.ML.Models.TrainTestBinaryEvaluator"> <summary> Train test for binary classification </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.TestingData"> <summary> The data to be used for testing </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Nodes"> <summary> The training subgraph </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Inputs"> <summary> The training subgraph inputs </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Outputs"> <summary> The training subgraph outputs </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics 
dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestBinaryEvaluator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestMacroSubGraphInput.Data"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestMacroSubGraphOutput.Model"> <summary> The model </summary> </member> <member name="T:Microsoft.ML.Models.TrainTestEvaluator"> <summary> General train test for any supported evaluator </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.TestingData"> <summary> The data to be used for testing </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.TransformModel"> <summary> The aggregated transform model from the pipeline before this command, to apply to the test data, and also include in the final model, together with the predictor model. </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Nodes"> <summary> The training subgraph </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Inputs"> <summary> The training subgraph inputs </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Outputs"> <summary> The training subgraph outputs </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Kind"> <summary> Specifies the trainer kind, which determines the evaluator to be used. </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.PipelineId"> <summary> Identifies which pipeline was run for this train test. </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.IncludeTrainingMetrics"> <summary> Indicates whether to include and output training dataset metrics. 
</summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.PredictorModel"> <summary> The final model including the trained predictor model and the model from the transforms, provided as the Input.TransformModel. </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.Warnings"> <summary> Warning dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.OverallMetrics"> <summary> Overall metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.PerInstanceMetrics"> <summary> Per instance metrics dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.ConfusionMatrix"> <summary> Confusion matrix dataset </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.TrainingWarnings"> <summary> Warning dataset for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.TrainingOverallMetrics"> <summary> Overall metrics dataset for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.TrainingPerInstanceMetrics"> <summary> Per instance metrics dataset for training </summary> </member> <member name="P:Microsoft.ML.Models.TrainTestEvaluator.Output.TrainingConfusionMatrix"> <summary> Confusion matrix dataset for training </summary> </member> <member name="T:Microsoft.ML.Models.BinaryClassificationMetrics"> <summary> This class contains the overall metrics computed by binary classification evaluators. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.Auc"> <summary> Gets the area under the ROC curve. </summary> <remarks> The area under the ROC curve is equal to the probability that the classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative'). 
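This pairwise interpretation can be checked directly on a toy score set. A minimal Python sketch with illustrative scores and labels (assumed values, not Microsoft.ML output): AUC is the fraction of (positive, negative) pairs where the positive instance scores higher, counting ties as half.

```python
from itertools import product

# Hypothetical classifier scores and true labels (illustrative values only).
scores = [0.9, 0.7, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]

positives = [s for s, y in zip(scores, labels) if y == 1]
negatives = [s for s, y in zip(scores, labels) if y == 0]

# AUC equals the probability that a random positive outranks a random negative.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p, n in product(positives, negatives))
auc = wins / (len(positives) * len(negatives))
print(auc)
```

Here 5 of the 6 positive/negative pairs are ranked correctly, so the AUC is 5/6.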
</remarks> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.Accuracy"> <summary> Gets the accuracy of a classifier, which is the proportion of correct predictions in the test set. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.PositivePrecision"> <summary> Gets the positive precision of a classifier, which is the proportion of correctly predicted positive instances among all the positive predictions (i.e., the number of positive instances predicted as positive, divided by the total number of instances predicted as positive). </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.PositiveRecall"> <summary> Gets the positive recall of a classifier, which is the proportion of correctly predicted positive instances among all the positive instances (i.e., the number of positive instances predicted as positive, divided by the total number of positive instances). </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.NegativePrecision"> <summary> Gets the negative precision of a classifier, which is the proportion of correctly predicted negative instances among all the negative predictions (i.e., the number of negative instances predicted as negative, divided by the total number of instances predicted as negative). </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.NegativeRecall"> <summary> Gets the negative recall of a classifier, which is the proportion of correctly predicted negative instances among all the negative instances (i.e., the number of negative instances predicted as negative, divided by the total number of negative instances). </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.LogLoss"> <summary> Gets the log-loss of the classifier. 
</summary> <remarks> The log-loss metric is computed as follows: LL = - (1/m) * sum( log(p[i])) where m is the number of instances in the test set. p[i] is the probability returned by the classifier if the instance belongs to class 1, and 1 minus the probability returned by the classifier if the instance belongs to class 0. </remarks> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.LogLossReduction"> <summary> Gets the log-loss reduction (also known as relative log-loss, or reduction in information gain - RIG) of the classifier. </summary> <remarks> The log-loss reduction is scaled relative to a classifier that predicts the prior for every example: (LL(prior) - LL(classifier)) / LL(prior). This metric can be interpreted as the advantage of the classifier over a random prediction. E.g., if the RIG equals 20, it can be interpreted as "the probability of a correct prediction is 20% better than random guessing". </remarks> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.Entropy"> <summary> Gets the test-set entropy (prior Log-Loss/instance) of the classifier. </summary> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.F1Score"> <summary> Gets the F1 score of the classifier. </summary> <remarks> F1 score is the harmonic mean of precision and recall: 2 * precision * recall / (precision + recall). </remarks> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.Auprc"> <summary> Gets the area under the precision/recall curve of the classifier. </summary> <remarks> The area under the precision/recall curve is a single number summary of the information in the precision/recall curve. It is increasingly used in the machine learning community, particularly for imbalanced datasets where one class is observed more frequently than the other. On these datasets, AUPRC can highlight performance differences that are lost with AUC. 
</remarks> </member> <member name="P:Microsoft.ML.Models.BinaryClassificationMetrics.ConfusionMatrix"> <summary> Gets the confusion matrix, or error matrix, of the classifier. </summary> </member> <member name="T:Microsoft.ML.Models.BinaryClassificationMetrics.SerializationClass"> <summary> This class contains the public fields necessary to deserialize from IDataView. </summary> </member> <member name="T:Microsoft.ML.Models.ClassificationMetrics"> <summary> This class contains the overall metrics computed by multi-class classification evaluators. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.AccuracyMicro"> <summary> Gets the micro-average accuracy of the model. </summary> <remarks> The micro-average is the fraction of instances predicted correctly. The micro-average metric weighs each class according to the number of instances that belong to it in the dataset. </remarks> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.AccuracyMacro"> <summary> Gets the macro-average accuracy of the model. </summary> <remarks> The macro-average is computed by taking the average over all the classes of the fraction of correct predictions in this class (the number of correctly predicted instances in the class, divided by the total number of instances in the class). The macro-average metric gives the same weight to each class, no matter how many instances from that class the dataset contains. </remarks> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.LogLoss"> <summary> Gets the average log-loss of the classifier. </summary> <remarks> The log-loss metric is computed as follows: LL = - (1/m) * sum( log(p[i])) where m is the number of instances in the test set. p[i] is the probability returned by the classifier if the instance belongs to class 1, and 1 minus the probability returned by the classifier if the instance belongs to class 0. 
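As a worked check of the formula above, here is a minimal Python sketch with illustrative probabilities and labels (the values are assumptions, not Microsoft.ML output): p[i] is the predicted probability for class 1 when the true label is 1, and 1 minus that probability when it is 0.

```python
import math

# Hypothetical predicted class-1 probabilities and true labels (illustrative values only).
probs = [0.9, 0.6, 0.2]
labels = [1, 1, 0]

# LL = -(1/m) * sum(log p[i]), where p[i] is the probability assigned
# to the true class of instance i.
m = len(labels)
log_loss = -sum(math.log(p if y == 1 else 1.0 - p)
                for p, y in zip(probs, labels)) / m
print(round(log_loss, 4))
```

A perfect classifier would assign probability 1 to every true class, giving a log-loss of 0; less confident (or wrong) predictions raise it.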
</remarks> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.LogLossReduction"> <summary> Gets the log-loss reduction (also known as relative log-loss, or reduction in information gain - RIG) of the classifier. </summary> <remarks> The log-loss reduction is scaled relative to a classifier that predicts the prior for every example: (LL(prior) - LL(classifier)) / LL(prior). This metric can be interpreted as the advantage of the classifier over a random prediction. E.g., if the RIG equals 20, it can be interpreted as "the probability of a correct prediction is 20% better than random guessing". </remarks> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.TopKAccuracy"> <summary> If <see cref="P:Microsoft.ML.Models.ClassificationEvaluator.OutputTopKAcc"/> was set to k on the evaluator, then TopKAccuracy is the fraction of examples where the true label is one of the top k labels predicted by the predictor. </summary> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.PerClassLogLoss"> <summary> Gets the log-loss of the classifier for each class. </summary> <remarks> The log-loss metric is computed as follows: LL = - (1/m) * sum( log(p[i])) where m is the number of instances in the test set. p[i] is the probability returned by the classifier if the instance belongs to the class, and 1 minus the probability returned by the classifier if the instance does not belong to the class. </remarks> </member> <member name="P:Microsoft.ML.Models.ClassificationMetrics.ConfusionMatrix"> <summary> Gets the confusion matrix, or error matrix, of the classifier. </summary> </member> <member name="T:Microsoft.ML.Models.ClassificationMetrics.SerializationClass"> <summary> This class contains the public fields necessary to deserialize from IDataView. </summary> </member> <member name="T:Microsoft.ML.Models.ConfusionMatrix"> <summary> The confusion matrix shows the predicted values vs the actual values. 
Each row of the matrix represents the instances in a predicted class, while each column represents the instances in the actual class. </summary> </member> <member name="P:Microsoft.ML.Models.ConfusionMatrix.Order"> <summary> Gets the number of rows or columns in the matrix. </summary> </member> <member name="P:Microsoft.ML.Models.ConfusionMatrix.ClassNames"> <summary> Gets the class names of the confusion matrix in the same order as the rows/columns. </summary> </member> <member name="P:Microsoft.ML.Models.ConfusionMatrix.Item(System.Int32,System.Int32)"> <summary> Obtains the value at the specified indices. </summary> <param name="x"> The row index to retrieve. </param> <param name="y"> The column index to retrieve. </param> </member> <member name="P:Microsoft.ML.Models.ConfusionMatrix.Item(System.String,System.String)"> <summary> Obtains the value for the specified class names. </summary> <param name="x"> The name of the class whose row to retrieve. </param> <param name="y"> The name of the class whose column to retrieve. </param> </member> <member name="T:Microsoft.ML.Models.RegressionMetrics"> <summary> This class contains the overall metrics computed by regression evaluators. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionMetrics.L1"> <summary> Gets the absolute loss of the model. </summary> <remarks> The absolute loss is defined as L1 = (1/m) * sum( abs( yi - y'i)) where m is the number of instances in the test set. y'i are the predicted labels for each instance. yi are the correct labels of each instance. </remarks> </member> <member name="P:Microsoft.ML.Models.RegressionMetrics.L2"> <summary> Gets the squared loss of the model. </summary> <remarks> The squared loss is defined as L2 = (1/m) * sum(( yi - y'i)^2) where m is the number of instances in the test set. y'i are the predicted labels for each instance. yi are the correct labels of each instance. 
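The L1 and L2 definitions above (and the RMS loss, which is the square root of L2) can be verified on a toy regression set. A minimal Python sketch with illustrative labels and predictions (assumed values, not Microsoft.ML output):

```python
import math

# Hypothetical true labels and regressor predictions (illustrative values only).
y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 4.0]

m = len(y_true)
# L1 = (1/m) * sum(abs(yi - y'i)): mean absolute loss.
l1 = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / m
# L2 = (1/m) * sum((yi - y'i)^2): mean squared loss.
l2 = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / m
# RMS is the square root of the L2 loss.
rms = math.sqrt(l2)
print(l1, l2, rms)
```

With errors of 0.5, 0, and 1.5, this gives L1 = 2/3, L2 = 2.5/3, and RMS = sqrt(2.5/3).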
</remarks> </member> <member name="P:Microsoft.ML.Models.RegressionMetrics.Rms"> <summary> Gets the root mean square loss (or RMS), which is the square root of the L2 loss. </summary> </member> <member name="P:Microsoft.ML.Models.RegressionMetrics.LossFn"> <summary> Gets the user defined loss function. </summary> <remarks> This is the average of a loss function defined by the user, computed over all the instances in the test set. </remarks> </member> <member name="P:Microsoft.ML.Models.RegressionMetrics.RSquared"> <summary> Gets the R squared value of the model, which is also known as the coefficient of determination. </summary> </member> <member name="T:Microsoft.ML.Models.RegressionMetrics.SerializationClass"> <summary> This class contains the public fields necessary to deserialize from IDataView. </summary> </member> <member name="T:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier"> <summary> Trains an averaged perceptron. </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.LossFunction"> <summary> Loss Function </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. 
Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.LearningRate"> <summary> Learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.DecreaseLearningRate"> <summary> Decrease learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.ResetWeightsAfterXExamples"> <summary> Number of examples after which weights will be reset to the current average </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.DoLazyUpdates"> <summary> Instead of updating averaged weights on every example, only update when loss is nonzero </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.L2RegularizerWeight"> <summary> L2 Regularization Weight </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.RecencyGain"> <summary> Extra weight given to more recent updates </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.RecencyGainMulti"> <summary> Whether Recency Gain is multiplicative (vs. additive) </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.Averaged"> <summary> Do averaging? 
</summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.AveragedTolerance"> <summary> The inexactness tolerance for averaging </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.NumIterations"> <summary> Number of iterations </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.InitialWeights"> <summary> Initial Weights and bias, comma-separated </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.Shuffle"> <summary> Whether to shuffle for each training iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.StreamingCacheSize"> <summary> Size of cache when trained in Scope </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.AveragedPerceptronBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.BinaryLogisticRegressor"> <summary> Train a logistic regression binary model </summary> 
</member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.ShowTrainingStats"> <summary> Show statistics of training examples. </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.L2Weight"> <summary> L2 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.L1Weight"> <summary> L1 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.OptTol"> <summary> Tolerance parameter for optimization convergence. Lower = slower, more accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.MemorySize"> <summary> Memory size for L-BFGS. Lower=faster, less accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.MaxIterations"> <summary> Maximum iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.SgdInitializationTolerance"> <summary> Run SGD to initialize LR weights, converging to this tolerance </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.Quiet"> <summary> If set to true, produce no output during training. </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.UseThreads"> <summary> Whether or not to use threads. 
Default is true </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.NumThreads"> <summary> Number of threads </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.DenseOptimizer"> <summary> Force densification of the internal optimization vectors </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.EnforceNonNegativity"> <summary> Enforce non-negative weights </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.BinaryLogisticRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastForestBinaryClassifier"> <summary> Uses a random forest learner to perform binary classification. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. 
Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.QuantileSampleCount"> <summary> Number of labels to be sampled from each leaf to make the distribution </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.CompressEnsemble"> <summary> Compress the tree Ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.MaxTreesAfterCompression"> <summary> Maximum Number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastForestRegressor"> <summary> Trains a random forest to fit target values using least-squares. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.ShuffleLabels"> <summary> Shuffle the labels on every iteration. Probably useful only if using this tree as a tree leaf featurizer for multiclass. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.QuantileSampleCount"> <summary> Number of labels to be sampled from each leaf to make the distribution </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. 
This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastForestRegressor.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.PrintTrainValidGraph"> <summary> Print train and validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastForestRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastTreeBinaryClassifier"> <summary> Uses a logit-boost boosted tree learner to perform binary 
classification. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.UnbalancedSets"> <summary> Should we use derivatives optimized for unbalanced sets </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.BaggingSize"> 
<summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.PrintTrainValidGraph"> <summary> Print train and validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastTreeRanker"> <summary> Trains gradient boosted decision trees using the LambdaRank quasi-gradient. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.CustomGains"> <summary> Comma-separated list of gains associated with each relevance label. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.TrainDcg"> <summary> Train DCG instead of NDCG </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.SortingAlgorithm"> <summary> The sorting algorithm to use for DCG and LambdaMART calculations [DescendingStablePessimistic/DescendingStable/DescendingReverse/DescendingDotNet] </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.LambdaMartMaxTruncation"> <summary> Max-NDCG truncation to use in the LambdaMART algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.ShiftedNdcg"> <summary> Use shifted NDCG </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.CostFunctionParam"> <summary> Cost function parameter (w/c) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.DistanceWeight2"> <summary> Distance weight 2 adjustment to cost </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NormalizeQueryLambdas"> <summary> Normalize query lambdas </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. (a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeRanker.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.CategoricalSplit"> <summary> Whether to split 
based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeRanker.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.PrintTrainValidGraph"> <summary> Print train and validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.TrainingData"> <summary> The data to be used for training 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRanker.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastTreeRegressor"> <summary> Trains gradient boosted decision trees to fit target values using least-squares. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> 
</member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. 
This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.BaggingSize"> <summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeRegressor.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.FastTreeTweedieRegressor"> <summary> Trains gradient boosted decision trees to fit target values using a Tweedie loss function. 
This learner is a generalization of Poisson, compound Poisson, and gamma regression. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Index"> <summary> Index parameter for the Tweedie distribution, in the range [1, 2]. 1 is Poisson loss, 2 is gamma loss, and intermediate values are compound Poisson loss. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.BestStepRankingRegressionTrees"> <summary> Use best regression step trees? </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.UseLineSearch"> <summary> Should we use line search for a step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.NumPostBracketSteps"> <summary> Number of post-bracket line search steps </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MinStepSize"> <summary> Minimum line search step size </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.OptimizationAlgorithm"> <summary> Optimization algorithm to be used (GradientDescent, AcceleratedGradientDescent) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.EarlyStoppingRule"> <summary> Early stopping rule. (Validation set (/valid) is required.) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.EarlyStoppingMetrics"> <summary> Early stopping metrics. (For regression, 1: L1, 2:L2; for ranking, 1:NDCG@1, 3:NDCG@3) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.EnablePruning"> <summary> Enable post-training pruning to avoid overfitting. 
(a validation set is required) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.UseTolerantPruning"> <summary> Use window and tolerance for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.PruningThreshold"> <summary> The tolerance threshold for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.PruningWindowSize"> <summary> The moving window size for pruning </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Shrinkage"> <summary> Shrinkage </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.DropoutRate"> <summary> Dropout rate for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.WriteLastEnsemble"> <summary> Write the last ensemble instead of the one determined by early stopping </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MaxTreeOutput"> <summary> Upper bound on absolute value of single tree output </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.RandomStart"> <summary> Training starts from random ordering (determined by /r1) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FilterZeroLambdas"> <summary> Filter zero lambdas during training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.BaselineScoresFormula"> <summary> Freeform defining the scores that should be used as the baseline ranker </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.BaselineAlphaRisk"> <summary> Baseline alpha for tradeoffs of risk (0 is normal training) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.PositionDiscountFreeform"> <summary> The discount freeform which specifies the per position discounts of documents in a query (uses a single variable P for position where P=0 is first position) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.ParallelTrainer"> <summary> Allows choosing a parallel FastTree learning algorithm </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureSelectSeed"> <summary> The seed of the active feature selection </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.HistogramPoolSize"> <summary> The number of histograms in the pool (between 2 and numLeaves) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.CategoricalSplit"> <summary> Whether to split based on multiple categorical feature values. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MaxCategoricalGroupsPerNode"> <summary> Maximum categorical split groups to consider when splitting on a categorical feature. Split groups are a collection of split points. This is used to reduce overfitting when there are many categorical features. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MaxCategoricalSplitPoints"> <summary> Maximum categorical split points to consider when splitting on a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MinDocsPercentageForCategoricalSplit"> <summary> Minimum categorical docs percentage in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MinDocsForCategoricalSplit"> <summary> Minimum categorical doc count in a bin to consider for a split. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Bias"> <summary> Bias for calculating gradient for each feature bin for a categorical feature. </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Bundling"> <summary> Bundle low population bins. Bundle.None(0): no bundling, Bundle.AggregateLowPopulation(1): Bundle low population, Bundle.Adjacent(2): Neighbor low population bundle. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.SparsifyThreshold"> <summary> Sparsity level needed to use sparse feature representation </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureFirstUsePenalty"> <summary> The feature first use penalty coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureReusePenalty"> <summary> The feature re-use penalty (regularization) coefficient </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.SoftmaxTemperature"> <summary> The temperature of the randomized softmax distribution for choosing the feature </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.ExecutionTimes"> <summary> Print execution time breakdown to stdout </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.NumLeaves"> <summary> The max number of leaves in each regression tree </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MinDocumentsInLeafs"> <summary> The minimal number of documents allowed in a leaf of a regression tree, out of the subsampled data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.NumTrees"> <summary> Number of weak hypotheses in the ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureFraction"> <summary> The fraction of features (chosen randomly) to use on each iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.BaggingSize"> 
<summary> Number of trees in each bag (0 for disabling bagging) </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.BaggingTrainFraction"> <summary> Percentage of training examples used in each bag </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.SplitFraction"> <summary> The fraction of features (chosen randomly) to use on each split </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Smoothing"> <summary> Smoothing parameter for tree regularization </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.AllowEmptyTrees"> <summary> When a root split is impossible, allow training to proceed </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureCompressionLevel"> <summary> The level of feature compression to use </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.CompressEnsemble"> <summary> Compress the tree ensemble </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.MaxTreesAfterCompression"> <summary> Maximum number of trees after compression </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.PrintTestGraph"> <summary> Print metrics graph for the first test set </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.PrintTrainValidGraph"> <summary> Print Train and Validation metrics in graph </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.TestFrequency"> <summary> Calculate metric values for train/valid/test every k rounds </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.GroupIdColumn"> <summary> Column to use for example groupId </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member 
name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.FastTreeTweedieRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier"> <summary> Trains a gradient boosted stump per feature, on all features simultaneously, to fit target values using least-squares. It maintains no interactions between features. </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.UnbalancedSets"> <summary> Should we use derivatives optimized for unbalanced sets </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. 
Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.NumIterations"> <summary> Total number of iterations over all features </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.MaxOutput"> <summary> Upper bound on absolute value of single output </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.RngSeed"> 
<summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.MinDocuments"> <summary> Minimum number of training instances required to form a partition </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor"> <summary> Trains a gradient boosted stump per feature, on all features simultaneously, to fit target values using least-squares. It maintains no interactions between features. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.EntropyCoefficient"> <summary> The entropy (regularization) coefficient between 0 and 1 </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.GainConfidenceLevel"> <summary> Tree fitting gain confidence requirement (should be in the range [0,1) ). </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.NumIterations"> <summary> Total number of iterations over all features </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.NumThreads"> <summary> The number of threads to use </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.LearningRates"> <summary> The learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.DiskTranspose"> <summary> Whether to utilize the disk or the data's native transposition facilities (where applicable) when performing the transpose </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.MaxBins"> <summary> Maximum number of distinct values (bins) per feature </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.MaxOutput"> <summary> Upper bound on absolute value of single output </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.GetDerivativesSampleRate"> <summary> Sample each query 1 in k times in the GetDerivatives function </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.RngSeed"> <summary> The seed of the random number generator </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.MinDocuments"> <summary> Minimum number of training instances required to form a partition </summary> </member> <member 
name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.FeatureFlocks"> <summary> Whether to collectivize features during dataset preparation to speed up training </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.GeneralizedAdditiveModelRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.LinearSvmBinaryClassifier"> <summary> Train a linear SVM. </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.Lambda"> <summary> Regularizer constant </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.BatchSize"> <summary> Batch size </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.PerformProjection"> <summary> Perform projection to unit-ball? Typically used with batch size > 1. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.NoBias"> <summary> No bias </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.NumIterations"> <summary> Number of iterations </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.InitialWeights"> <summary> Initial Weights and bias, comma-separated </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.Shuffle"> <summary> Whether to shuffle for each training iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.StreamingCacheSize"> <summary> Size of cache when trained in Scope </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member 
name="P:Microsoft.ML.Trainers.LinearSvmBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.LogisticRegressor"> <summary> Train a multi-class logistic regression model </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.ShowTrainingStats"> <summary> Show statistics of training examples. </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.L2Weight"> <summary> L2 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.L1Weight"> <summary> L1 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.OptTol"> <summary> Tolerance parameter for optimization convergence. Lower = slower, more accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.MemorySize"> <summary> Memory size for L-BFGS. Lower=faster, less accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.MaxIterations"> <summary> Maximum iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.SgdInitializationTolerance"> <summary> Run SGD to initialize LR weights, converging to this tolerance </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.Quiet"> <summary> If set to true, produce no output during training. </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.UseThreads"> <summary> Whether or not to use threads. 
Default is true </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.NumThreads"> <summary> Number of threads </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.DenseOptimizer"> <summary> Force densification of the internal optimization vectors </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.EnforceNonNegativity"> <summary> Enforce non-negative weights </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.LogisticRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.NaiveBayesClassifier"> <summary> Train a multi-class naive Bayes model (MultiClassNaiveBayesTrainer). 
</summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.NaiveBayesClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.OnlineGradientDescentRegressor"> <summary> Train an online gradient descent perceptron. </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.LossFunction"> <summary> Loss Function </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.LearningRate"> <summary> Learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.DecreaseLearningRate"> <summary> Decrease learning rate </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.ResetWeightsAfterXExamples"> <summary> Number of examples after which weights will be reset to the current average </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.DoLazyUpdates"> <summary> Instead of updating averaged weights on every example, only update when loss is nonzero </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.L2RegularizerWeight"> <summary> L2 Regularization Weight </summary> </member> <member 
name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.RecencyGain"> <summary> Extra weight given to more recent updates </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.RecencyGainMulti"> <summary> Whether Recency Gain is multiplicative (vs. additive) </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.Averaged"> <summary> Do averaging? </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.AveragedTolerance"> <summary> The inexactness tolerance for averaging </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.NumIterations"> <summary> Number of iterations </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.InitialWeights"> <summary> Initial Weights and bias, comma-separated </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.Shuffle"> <summary> Whether to shuffle for each training iteration </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.StreamingCacheSize"> <summary> Size of cache when trained in Scope </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member 
name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.OnlineGradientDescentRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor"> <summary> Train an OLS regression model. </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.L2Weight"> <summary> L2 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.PerParameterSignificance"> <summary> Whether to calculate per parameter significance statistics </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.OrdinaryLeastSquaresRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.PoissonRegressor"> <summary> Train a Poisson regression model. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.L2Weight"> <summary> L2 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.L1Weight"> <summary> L1 regularization weight </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.OptTol"> <summary> Tolerance parameter for optimization convergence. Lower = slower, more accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.MemorySize"> <summary> Memory size for L-BFGS. Lower=faster, less accurate </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.MaxIterations"> <summary> Maximum iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.SgdInitializationTolerance"> <summary> Run SGD to initialize LR weights, converging to this tolerance </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.Quiet"> <summary> If set to true, produce no output during training. </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.InitWtsDiameter"> <summary> Init weights diameter </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.UseThreads"> <summary> Whether or not to use threads. 
Default is true </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.NumThreads"> <summary> Number of threads </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.DenseOptimizer"> <summary> Force densification of the internal optimization vectors </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.EnforceNonNegativity"> <summary> Enforce non-negative weights </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.PoissonRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier"> <summary> Train an SDCA binary model. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.LossFunction"> <summary> Loss Function </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.PositiveInstanceWeight"> <summary> Apply weight to the positive class, for imbalanced data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.L2Const"> <summary> L2 regularizer constant. By default the l2 constant is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.L1Threshold"> <summary> L1 soft threshold (L1/L2). Note that it is easier to control and sweep using the threshold parameter than the raw L1-regularizer constant. By default the l1 threshold is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.NumThreads"> <summary> Degree of lock-free parallelism. Defaults to automatic. Determinism not guaranteed. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.ConvergenceTolerance"> <summary> The tolerance for the ratio between duality gap and primal loss for convergence checking. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.MaxIterations"> <summary> Maximum number of iterations; set to 1 to simulate online learning. Defaults to automatic. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.Shuffle"> <summary> Shuffle data every epoch? </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.CheckFrequency"> <summary> Convergence check frequency (in terms of number of iterations). Set as negative or zero for not checking at all. If left blank, it defaults to check after every 'numThreads' iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.BiasLearningRate"> <summary> The learning rate for adjusting bias from being regularized. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier"> <summary> Train an SDCA multiclass model </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.LossFunction"> <summary> Loss Function </summary> </member> <member 
name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.L2Const"> <summary> L2 regularizer constant. By default the l2 constant is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.L1Threshold"> <summary> L1 soft threshold (L1/L2). Note that it is easier to control and sweep using the threshold parameter than the raw L1-regularizer constant. By default the l1 threshold is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.NumThreads"> <summary> Degree of lock-free parallelism. Defaults to automatic. Determinism not guaranteed. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.ConvergenceTolerance"> <summary> The tolerance for the ratio between duality gap and primal loss for convergence checking. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.MaxIterations"> <summary> Maximum number of iterations; set to 1 to simulate online learning. Defaults to automatic. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.Shuffle"> <summary> Shuffle data every epoch? </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.CheckFrequency"> <summary> Convergence check frequency (in terms of number of iterations). Set as negative or zero for not checking at all. If left blank, it defaults to check after every 'numThreads' iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.BiasLearningRate"> <summary> The learning rate for adjusting bias from being regularized. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor"> <summary> Train an SDCA regression model </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.LossFunction"> <summary> Loss Function </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.L2Const"> <summary> L2 regularizer constant. By default the l2 constant is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.L1Threshold"> <summary> L1 soft threshold (L1/L2). Note that it is easier to control and sweep using the threshold parameter than the raw L1-regularizer constant. By default the l1 threshold is automatically inferred based on data set. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.NumThreads"> <summary> Degree of lock-free parallelism. Defaults to automatic. Determinism not guaranteed. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.ConvergenceTolerance"> <summary> The tolerance for the ratio between duality gap and primal loss for convergence checking. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.MaxIterations"> <summary> Maximum number of iterations; set to 1 to simulate online learning. Defaults to automatic. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.Shuffle"> <summary> Shuffle data every epoch? </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.CheckFrequency"> <summary> Convergence check frequency (in terms of number of iterations). Set as negative or zero for not checking at all. If left blank, it defaults to check after every 'numThreads' iterations. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.BiasLearningRate"> <summary> The learning rate for adjusting bias from being regularized. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticDualCoordinateAscentRegressor.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier"> <summary> Train a Hogwild SGD binary model. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.LossFunction"> <summary> Loss Function </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.L2Const"> <summary> L2 regularizer constant </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.NumThreads"> <summary> Degree of lock-free parallelism. Defaults to automatic depending on data sparseness. Determinism not guaranteed. 
</summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.ConvergenceTolerance"> <summary> Exponential moving averaged improvement tolerance for convergence </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.MaxIterations"> <summary> Maximum number of iterations; set to 1 to simulate online learning. </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.InitLearningRate"> <summary> Initial learning rate (only used by SGD) </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.Shuffle"> <summary> Shuffle data every epoch? </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.PositiveInstanceWeight"> <summary> Apply weight to the positive class, for imbalanced data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.CheckFrequency"> <summary> Convergence check frequency (in terms of number of iterations). Default equals number of threads </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.Calibrator"> <summary> The calibrator kind to apply to the predictor. 
Specify null for no calibration </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.MaxCalibrationExamples"> <summary> The maximum number of examples to use when training the calibrator </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.WeightColumn"> <summary> Column to use for example weight </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.TrainingData"> <summary> The data to be used for training </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.FeatureColumn"> <summary> Column to use for features </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.NormalizeFeatures"> <summary> Normalize option for the feature column </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.Caching"> <summary> Whether learner should cache input training data </summary> </member> <member name="P:Microsoft.ML.Trainers.StochasticGradientDescentBinaryClassifier.Output.PredictorModel"> <summary> The trained model </summary> </member> <member name="T:Microsoft.ML.Transforms.ApproximateBootstrapSampler"> <summary> Approximate bootstrap sampling. </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.Complement"> <summary> Whether this is the out-of-bag sample, that is, all those rows that are not selected by the transform. </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.Seed"> <summary> The random seed. If unspecified random state will be instead derived from the environment. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.ShuffleInput"> <summary> Whether we should attempt to shuffle the source data. By default on, but can be turned off for efficiency. </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.PoolSize"> <summary> When shuffling the output, the number of output rows to keep in that pool. Note that shuffling of output is completely distinct from shuffling of input. </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ApproximateBootstrapSampler.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.BinaryPredictionScoreColumnsRenamer"> <summary> For binary prediction, it renames the PredictedLabel and Score columns to include the name of the positive class. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.BinaryPredictionScoreColumnsRenamer.PredictorModel"> <summary> The predictor model used in scoring </summary> </member> <member name="P:Microsoft.ML.Transforms.BinaryPredictionScoreColumnsRenamer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.BinaryPredictionScoreColumnsRenamer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.BinaryPredictionScoreColumnsRenamer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformBinColumn.NumBins"> <summary> Max number of bins, power of 2 recommended </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformBinColumn.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformBinColumn.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformBinColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformBinColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.BinNormalizer"> <summary> The values are assigned into equidensity bins and a value is mapped to its bin_number/number_of_bins. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.NumBins"> <summary> Max number of bins, power of 2 recommended </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.BinNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.HashBits"> <summary> The number of bits to hash into. Must be between 1 and 30, inclusive. </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.Seed"> <summary> Hashing seed </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.Ordered"> <summary> Whether the position of each term should be included in the hash </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.InvertHash"> <summary> Limit the number of keys used to generate the slot name to this many. 0 means no invert hashing, -1 means no limit. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.OutputKind"> <summary> Output kind: Bag (multi-set vector), Ind (indicator vector), or Key (index) </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer"> <summary> Encodes the categorical variable with hash-based encoding </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Column"> <summary> New column definition(s) (optional form: name:hashBits:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.HashBits"> <summary> Number of bits to hash into. Must be between 1 and 30, inclusive. </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Seed"> <summary> Hashing seed </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Ordered"> <summary> Whether the position of each term should be included in the hash </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.InvertHash"> <summary> Limit the number of keys used to generate the slot name to this many. 0 means no invert hashing, -1 means no limit. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.OutputKind"> <summary> Output kind: Bag (multi-set vector), Ind (indicator vector), or Key (index) </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalHashOneHotVectorizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.OutputKind"> <summary> Output kind: Bag (multi-set vector), Ind (indicator vector), Key (index), or Binary encoded indicator vector </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.MaxNumTerms"> <summary> Maximum number of terms to keep when auto-training </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If by value, items are sorted according to their default comparison, e.g., text sorting will be case sensitive (e.g., 'A' then 'Z' then 'a'). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.TextKeyValues"> <summary> Whether key value metadata should be text, regardless of the actual input type </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.CategoricalOneHotVectorizer"> <summary> Encodes the categorical variable with one-hot encoding based on a term dictionary </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.OutputKind"> <summary> Output kind: Bag (multi-set vector), Ind (indicator vector), or Key (index) </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.MaxNumTerms"> <summary> Maximum number of terms to keep per column when auto-training </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If by value, items are sorted according to their default comparison, e.g., text sorting will be case sensitive (e.g., 'A' then 'Z' then 'a'). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.TextKeyValues"> <summary> Whether key value metadata should be text, regardless of the actual input type </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CategoricalOneHotVectorizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.CharTokenizeTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.CharTokenizeTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.CharacterTokenizer"> <summary> Character-oriented tokenizer where text is considered a sequence of characters. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.CharacterTokenizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.CharacterTokenizer.UseMarkerChars"> <summary> Whether to mark the beginning/end of each row/slot with start of text character (0x02)/end of text character (0x03) </summary> </member> <member name="P:Microsoft.ML.Transforms.CharacterTokenizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CharacterTokenizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CharacterTokenizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.ConcatTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.ConcatTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.ColumnConcatenator"> <summary> Concatenates one or more columns of the same item type. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnConcatenator.Column"> <summary> New column definition(s) (optional form: name:srcs) </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnConcatenator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnConcatenator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnConcatenator.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.CopyColumnsTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.CopyColumnsTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.ColumnCopier"> <summary> Duplicates columns from the dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnCopier.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnCopier.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnCopier.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnCopier.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.ColumnDropper"> <summary> Drops columns from the dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnDropper.Column"> <summary> Column name to drop </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnDropper.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnDropper.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnDropper.Output.Model"> <summary> Transform model </summary> </member> <member 
name="T:Microsoft.ML.Transforms.ColumnSelector"> <summary> Selects a set of columns, dropping all others </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnSelector.Column"> <summary> Column name to keep </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnSelector.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnSelector.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnSelector.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.ConvertTransformColumn.ResultType"> <summary> The result type </summary> </member> <member name="P:Microsoft.ML.Transforms.ConvertTransformColumn.Range"> <summary> For a key column, this defines the range of values </summary> </member> <member name="P:Microsoft.ML.Transforms.ConvertTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.ConvertTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.ColumnTypeConverter"> <summary> Converts a column to a different type, using standard conversions. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.Column"> <summary> New column definition(s) (optional form: name:type:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.ResultType"> <summary> The result type </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.Range"> <summary> For a key column, this defines the range of values </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ColumnTypeConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.CombinerByContiguousGroupId"> <summary> Groups values of a scalar column into a vector, by a contiguous group ID </summary> </member> <member name="P:Microsoft.ML.Transforms.CombinerByContiguousGroupId.GroupKey"> <summary> Columns to group by </summary> </member> <member name="P:Microsoft.ML.Transforms.CombinerByContiguousGroupId.Column"> <summary> Columns to group together </summary> </member> <member name="P:Microsoft.ML.Transforms.CombinerByContiguousGroupId.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CombinerByContiguousGroupId.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.CombinerByContiguousGroupId.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformAffineColumn.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformAffineColumn.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member 
name="P:Microsoft.ML.Transforms.NormalizeTransformAffineColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformAffineColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.ConditionalNormalizer"> <summary> Normalize the columns only if needed </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ConditionalNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.DataCache"> <summary> Caches using the specified cache option. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.DataCache.Caching"> <summary> Caching strategy </summary> </member> <member name="P:Microsoft.ML.Transforms.DataCache.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.DataCache.Output.OutputData"> <summary> Dataset </summary> </member> <member name="T:Microsoft.ML.Transforms.DatasetScorer"> <summary> Score a dataset with a predictor model </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetScorer.Data"> <summary> The dataset to be scored </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetScorer.PredictorModel"> <summary> The predictor model to apply to data </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetScorer.Suffix"> <summary> Suffix to append to the score columns </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetScorer.Output.ScoredData"> <summary> The scored dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetScorer.Output.ScoringTransform"> <summary> The scoring transform </summary> </member> <member name="T:Microsoft.ML.Transforms.DatasetTransformScorer"> <summary> Score a dataset with a transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetTransformScorer.Data"> <summary> The dataset to be scored </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetTransformScorer.TransformModel"> <summary> The transform model to apply to data </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetTransformScorer.Output.ScoredData"> <summary> The scored dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.DatasetTransformScorer.Output.ScoringTransform"> <summary> The scoring transform </summary> </member> <member name="P:Microsoft.ML.Transforms.TermTransformColumn.MaxNumTerms"> <summary> Maximum number of terms to keep when auto-training </summary> </member> <member 
name="P:Microsoft.ML.Transforms.TermTransformColumn.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.TermTransformColumn.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If sorted by value, items are ordered according to their default comparison; for example, text sorting will be case sensitive ('A' then 'Z' then 'a'). </summary> </member> <member name="P:Microsoft.ML.Transforms.TermTransformColumn.TextKeyValues"> <summary> Whether key value metadata should be text, regardless of the actual input type </summary> </member> <member name="P:Microsoft.ML.Transforms.TermTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.TermTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.Dictionarizer"> <summary> Converts input values (words, numbers, etc.) to an index in a dictionary. </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.MaxNumTerms"> <summary> Maximum number of terms to keep per column when auto-training </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If sorted by value, items are ordered according to their default comparison; for example, text sorting will be case sensitive ('A' then 'Z' then 'a'). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.TextKeyValues"> <summary> Whether key value metadata should be text, regardless of the actual input type </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.Dictionarizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.FeatureCombiner"> <summary> Combines all the features into one feature column. </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureCombiner.Features"> <summary> Features </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureCombiner.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureCombiner.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureCombiner.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.FeatureSelectorByCount"> <summary> Selects the slots for which the count of non-default values is greater than or equal to a threshold. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByCount.Column"> <summary> Columns to use for feature selection </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByCount.Count"> <summary> If the count of non-default values for a slot is greater than or equal to this threshold, the slot is preserved </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByCount.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByCount.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByCount.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation"> <summary> Selects the top k slots across all specified columns ordered by their mutual information with the label column. </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.Column"> <summary> Columns to use for feature selection </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.LabelColumn"> <summary> Column to use for labels </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.SlotsInOutput"> <summary> The maximum number of slots to preserve in output </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.NumBins"> <summary> Max number of bins for R4/R8 columns, power of 2 recommended </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.FeatureSelectorByMutualInformation.Output.Model"> <summary> Transform 
model </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformGcnColumn.UseStdDev"> <summary> Normalize by standard deviation rather than L2 norm </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformGcnColumn.Scale"> <summary> Scale features by this value </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformGcnColumn.SubMean"> <summary> Subtract mean from each value before normalizing </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformGcnColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformGcnColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.GlobalContrastNormalizer"> <summary> Performs a global contrast normalization on input values: Y = (s * X - M) / D, where s is a scale, M is mean and D is either L2 norm or standard deviation. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.SubMean"> <summary> Subtract mean from each value before normalizing </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.UseStdDev"> <summary> Normalize by standard deviation rather than L2 norm </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.Scale"> <summary> Scale features by this value </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.GlobalContrastNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.Join"> <summary> Whether the values need to be combined for a single hash </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.CustomSlotMap"> <summary> Which slots should be combined together. Example: 0,3,5;0,1;3;2,1,0. Overrides 'join'. </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.HashBits"> <summary> Number of bits to hash into. Must be between 1 and 31, inclusive. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.Seed"> <summary> Hashing seed </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.Ordered"> <summary> Whether the position of each term should be included in the hash </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.HashJoinTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.HashConverter"> <summary> Converts column values into hashes. This transform accepts both numeric and text inputs, both single and vector-valued columns. This is a part of the Dracula transform. </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Join"> <summary> Whether the values need to be combined for a single hash </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.HashBits"> <summary> Number of bits to hash into. Must be between 1 and 31, inclusive. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Seed"> <summary> Hashing seed </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Ordered"> <summary> Whether the position of each term should be included in the hash </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.HashConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToValueTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToValueTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.KeyToTextConverter"> <summary> KeyToValueTransform maps key indices to their corresponding values using the KeyValues metadata. </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToTextConverter.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToTextConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToTextConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.KeyToTextConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter"> <summary> Transforms the label to either key or bool (if needed) to make it suitable for classification. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter.TextKeyValues"> <summary> Convert the key values to text </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter.LabelColumn"> <summary> The label column </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelColumnKeyBooleanConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicatorTransformColumn.ClassIndex"> <summary> The positive example class for binary classification. </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicatorTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicatorTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.LabelIndicator"> <summary> Label remapper used by OVA </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicator.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicator.ClassIndex"> <summary> Label of the positive class. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelIndicator.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.LabelToFloatConverter"> <summary> Transforms the label to float to make it suitable for regression. </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelToFloatConverter.LabelColumn"> <summary> The label column </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelToFloatConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelToFloatConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LabelToFloatConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformLogNormalColumn.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformLogNormalColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NormalizeTransformLogNormalColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.LogMeanVarianceNormalizer"> <summary> Normalizes the data based on the computed mean and variance of the logarithm of the data. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.UseCdf"> <summary> Whether to use CDF as the output </summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LogMeanVarianceNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformColumn.NormKind"> <summary> The norm to use to normalize each sample </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformColumn.SubMean"> <summary> Subtract mean from each value before normalizing </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormNormalizerTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.LpNormalizer"> <summary> Normalize vectors (rows) individually by rescaling them to unit norm (L2, L1 or LInf). Performs the following operation on a vector X: Y = (X - M) / D, where M is mean and D is either L2 norm, L1 norm or LInf norm. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.NormKind"> <summary> The norm to use to normalize each sample </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.SubMean"> <summary> Subtract mean from each value before normalizing </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.LpNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.ManyHeterogeneousModelCombiner"> <summary> Combines a sequence of TransformModels and a PredictorModel into a single PredictorModel. </summary> </member> <member name="P:Microsoft.ML.Transforms.ManyHeterogeneousModelCombiner.TransformModels"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.ManyHeterogeneousModelCombiner.PredictorModel"> <summary> Predictor model </summary> </member> <member name="P:Microsoft.ML.Transforms.ManyHeterogeneousModelCombiner.Output.PredictorModel"> <summary> Predictor model </summary> </member> <member name="T:Microsoft.ML.Transforms.MeanVarianceNormalizer"> <summary> Normalizes the data based on the computed mean and variance of the data. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.UseCdf"> <summary> Whether to use CDF as the output </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MeanVarianceNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.MinMaxNormalizer"> <summary> Normalizes the data based on the observed minimum and maximum values of the data. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MinMaxNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NAHandleTransformColumn.Kind"> <summary> The replacement method to utilize </summary> </member> <member name="P:Microsoft.ML.Transforms.NAHandleTransformColumn.ImputeBySlot"> <summary> Whether to impute values by slot </summary> </member> <member name="P:Microsoft.ML.Transforms.NAHandleTransformColumn.ConcatIndicator"> <summary> Whether or not to concatenate an indicator vector column to the value column </summary> </member> <member name="P:Microsoft.ML.Transforms.NAHandleTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NAHandleTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.MissingValueHandler"> <summary> Handle missing values by replacing them with either the default value or the mean/min/max value (for non-text columns only). An indicator column can optionally be concatenated, if the input column type is numeric. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.Column"> <summary> New column definition(s) (optional form: name:rep:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.ReplaceWith"> <summary> The replacement method to utilize </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.ImputeBySlot"> <summary> Whether to impute values by slot </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.Concat"> <summary> Whether or not to concatenate an indicator vector column to the value column </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueHandler.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NAIndicatorTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NAIndicatorTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.MissingValueIndicator"> <summary> Create a boolean output column with the same number of slots as the input column, where the output value is true if the value in the input column is missing. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueIndicator.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueIndicator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueIndicator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueIndicator.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NADropTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NADropTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.MissingValuesDropper"> <summary> Removes NAs from vector columns. </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesDropper.Column"> <summary> Columns to drop the NAs for </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesDropper.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesDropper.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesDropper.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.MissingValuesRowDropper"> <summary> Filters out rows that contain missing values. </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesRowDropper.Column"> <summary> Column </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesRowDropper.Complement"> <summary> If true, keep only rows that contain NA values, and filter the rest. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesRowDropper.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesRowDropper.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValuesRowDropper.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.NAReplaceTransformColumn.ReplacementString"> <summary> Replacement value for NAs (uses default value if not given) </summary> </member> <member name="P:Microsoft.ML.Transforms.NAReplaceTransformColumn.Kind"> <summary> The replacement method to utilize </summary> </member> <member name="P:Microsoft.ML.Transforms.NAReplaceTransformColumn.Slot"> <summary> Whether to impute values by slot </summary> </member> <member name="P:Microsoft.ML.Transforms.NAReplaceTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NAReplaceTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.MissingValueSubstitutor"> <summary> Create an output column of the same type and size of the input column, where missing values are replaced with either the default value or the mean/min/max value (for non-text columns only). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.Column"> <summary> New column definition(s) (optional form: name:rep:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.ReplacementKind"> <summary> The replacement method to utilize </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.ImputeBySlot"> <summary> Whether to impute values by slot </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.MissingValueSubstitutor.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.ModelCombiner"> <summary> Combines a sequence of TransformModels into a single model </summary> </member> <member name="P:Microsoft.ML.Transforms.ModelCombiner.Models"> <summary> Input models </summary> </member> <member name="P:Microsoft.ML.Transforms.ModelCombiner.Output.OutputModel"> <summary> Combined model </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.NgramLength"> <summary> Maximum ngram length </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.AllLengths"> <summary> Whether to include all ngram lengths up to NgramLength or only NgramLength </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.SkipLength"> <summary> Maximum number of tokens to skip when constructing an ngram </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.MaxNumTerms"> <summary> Maximum number of ngrams to store in the dictionary </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.Weighting"> <summary> Statistical measure used to evaluate how important a word is to a 
document in a corpus </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.NgramTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.NGramTranslator"> <summary> Produces a bag of counts of ngrams (sequences of consecutive values of length 1-n) in a given vector of keys. It does so by building a dictionary of ngrams and using the id in the dictionary as the index in the bag. </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.NgramLength"> <summary> Maximum ngram length </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.AllLengths"> <summary> Whether to store all ngram lengths up to ngramLength, or only ngramLength </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.SkipLength"> <summary> Maximum number of tokens to skip when constructing an ngram </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.MaxNumTerms"> <summary> Maximum number of ngrams to store in the dictionary </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.Weighting"> <summary> The weighting criteria </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.NGramTranslator.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.NoOperation"> <summary> Does nothing. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.NoOperation.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.NoOperation.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.NoOperation.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.OptionalColumnCreator"> <summary> If the source column does not exist after deserialization, create a column with the right type and default values. </summary> </member> <member name="P:Microsoft.ML.Transforms.OptionalColumnCreator.Column"> <summary> New column definition(s) </summary> </member> <member name="P:Microsoft.ML.Transforms.OptionalColumnCreator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.OptionalColumnCreator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.OptionalColumnCreator.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.PredictedLabelColumnOriginalValueConverter"> <summary> Transforms a predicted label column to its original values, unless it is of type bool. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.PredictedLabelColumnOriginalValueConverter.PredictedLabelColumn"> <summary> The predicted label column </summary> </member> <member name="P:Microsoft.ML.Transforms.PredictedLabelColumnOriginalValueConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.PredictedLabelColumnOriginalValueConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.PredictedLabelColumnOriginalValueConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.GenerateNumberTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.GenerateNumberTransformColumn.UseCounter"> <summary> Use an auto-incremented integer starting at zero instead of a random number </summary> </member> <member name="P:Microsoft.ML.Transforms.GenerateNumberTransformColumn.Seed"> <summary> The random seed </summary> </member> <member name="T:Microsoft.ML.Transforms.RandomNumberGenerator"> <summary> Adds a column with a generated number sequence. 
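A minimal Python sketch of the two generation modes (auto-incremented counter vs. seeded random); the helper name is hypothetical and this is not the Microsoft.ML implementation:

```python
import random

def generate_number_column(n_rows, use_counter=False, seed=42):
    """Return one generated value per row (illustrative sketch)."""
    if use_counter:
        # Auto-incremented integer starting at zero.
        return list(range(n_rows))
    # A fixed seed makes the random column reproducible across runs.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_rows)]
```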
</summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.Column"> <summary> New column definition(s) (optional form: name:seed) </summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.UseCounter"> <summary> Use an auto-incremented integer starting at zero instead of a random number </summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.Seed"> <summary> The random seed </summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RandomNumberGenerator.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.RowRangeFilter"> <summary> Filters a dataview on a column of type Single, Double or Key (contiguous). Keeps the values that are in the specified min/max range. NaNs are always filtered out. If the input is a Key type, the min/max are considered percentages of the number of values. </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Column"> <summary> Column </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Min"> <summary> Minimum value (0 to 1 for key types) </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Max"> <summary> Maximum value (0 to 1 for key types) </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Complement"> <summary> If true, keep the values that fall outside the range. </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.IncludeMin"> <summary> If true, include in the range the values that are equal to min. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.IncludeMax"> <summary> If true, include in the range the values that are equal to max. </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowRangeFilter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.RowSkipAndTakeFilter"> <summary> Allows limiting input to a subset of rows at an optional offset. Can be used to implement data paging. </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipAndTakeFilter.Skip"> <summary> Number of items to skip </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipAndTakeFilter.Take"> <summary> Number of items to take </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipAndTakeFilter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipAndTakeFilter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipAndTakeFilter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.RowSkipFilter"> <summary> Allows limiting input to a subset of rows by skipping a number of rows. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipFilter.Count"> <summary> Number of items to skip </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipFilter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipFilter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowSkipFilter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.RowTakeFilter"> <summary> Allows limiting input to a subset of rows by taking the first N rows. </summary> </member> <member name="P:Microsoft.ML.Transforms.RowTakeFilter.Count"> <summary> Number of items to take </summary> </member> <member name="P:Microsoft.ML.Transforms.RowTakeFilter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowTakeFilter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.RowTakeFilter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.ScoreColumnSelector"> <summary> Selects only the last score columns and the extra columns specified in the arguments. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.ScoreColumnSelector.ExtraColumns"> <summary> Extra columns to write </summary> </member> <member name="P:Microsoft.ML.Transforms.ScoreColumnSelector.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ScoreColumnSelector.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.ScoreColumnSelector.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.Scorer"> <summary> Turn the predictor model into a transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.Scorer.PredictorModel"> <summary> The predictor model to turn into a transform </summary> </member> <member name="P:Microsoft.ML.Transforms.Scorer.Output.ScoredData"> <summary> The scored dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.Scorer.Output.ScoringTransform"> <summary> The scoring transform </summary> </member> <member name="T:Microsoft.ML.Transforms.Segregator"> <summary> Un-groups vector columns into sequences of rows, inverse of Group transform </summary> </member> <member name="P:Microsoft.ML.Transforms.Segregator.Column"> <summary> Columns to unroll, or 'pivot' </summary> </member> <member name="P:Microsoft.ML.Transforms.Segregator.Mode"> <summary> Specifies how to unroll multiple pivot columns of different size. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.Segregator.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.Segregator.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.Segregator.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.SentimentAnalyzer"> <summary> Uses a pretrained sentiment model to score input strings </summary> </member> <member name="P:Microsoft.ML.Transforms.SentimentAnalyzer.Source"> <summary> Name of the source column. </summary> </member> <member name="P:Microsoft.ML.Transforms.SentimentAnalyzer.Name"> <summary> Name of the new column. </summary> </member> <member name="P:Microsoft.ML.Transforms.SentimentAnalyzer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.SentimentAnalyzer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.SentimentAnalyzer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.SupervisedBinNormalizer"> <summary> Similar to BinNormalizer, but calculates bins based on correlation with the label column, not equi-density. The new value is bin_number / number_of_bins. 
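The output mapping can be sketched in Python; note that finding the bin edges (the supervised, label-correlated part) is a separate step that this illustrative sketch assumes is already done:

```python
from bisect import bisect_right

def bin_normalize(values, upper_edges):
    """Map each value to bin_number / number_of_bins.

    upper_edges holds the sorted upper boundaries of all but the last bin.
    Choosing the edges (by label correlation, in the supervised case) is a
    separate step that this sketch does not implement.
    """
    n_bins = len(upper_edges) + 1
    return [bisect_right(upper_edges, v) / n_bins for v in values]
```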
</summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.LabelColumn"> <summary> Label column for supervised binning </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.MinBinSize"> <summary> Minimum number of examples per bin </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.NumBins"> <summary> Max number of bins, power of 2 recommended </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.FixZero"> <summary> Whether to map zero to zero, preserving sparsity </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.MaxTrainingExamples"> <summary> Max number of examples used to train the normalizer </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.SupervisedBinNormalizer.Output.Model"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.TextTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.TextTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="P:Microsoft.ML.Transforms.TermLoaderArguments.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.TermLoaderArguments.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If sorted by value, items are ordered according to their default comparison; for text this is case-sensitive (e.g., 'A' then 'Z' then 'a'). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.TermLoaderArguments.DropUnknowns"> <summary> Drop unknown terms instead of mapping them to NA term. </summary> </member> <member name="T:Microsoft.ML.Transforms.TextFeaturizer"> <summary> A transform that turns a collection of text documents into numerical feature vectors. The feature vectors are normalized counts of (word and/or character) ngrams in a given tokenized text. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Column"> <summary> New column definition (optional form: name:srcs). </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Language"> <summary> Dataset language or 'AutoDetect' to detect language per row. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.StopWordsRemover"> <summary> Stopwords remover. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.TextCase"> <summary> Casing text using the rules of the invariant culture. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.KeepDiacritics"> <summary> Whether to keep diacritical marks or remove them. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.KeepPunctuations"> <summary> Whether to keep punctuation marks or remove them. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.KeepNumbers"> <summary> Whether to keep numbers or remove them. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.OutputTokens"> <summary> Whether to output the transformed text tokens as an additional column. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Dictionary"> <summary> A dictionary of whitelisted terms. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.WordFeatureExtractor"> <summary> Ngram feature extractor to use for words (WordBag/WordHashBag). 
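The bag-of-ngrams counting that such an extractor performs can be sketched in Python (illustrative only; dictionary ids, hashing, and MaxNumTerms trimming are omitted, and word_ngram_bag is a hypothetical helper):

```python
from collections import Counter

def word_ngram_bag(tokens, ngram_length=2, all_lengths=True):
    """Count word ngrams, keyed by the ngram itself (a real extractor
    would map each ngram to a dictionary id and emit a count vector)."""
    lengths = range(1, ngram_length + 1) if all_lengths else [ngram_length]
    counts = Counter()
    for n in lengths:
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts
```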
</summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.CharFeatureExtractor"> <summary> Ngram feature extractor to use for characters (WordBag/WordHashBag). </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.VectorNormalizer"> <summary> Normalize vectors (rows) individually by rescaling them to unit norm. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TextFeaturizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.TextToKeyConverter"> <summary> Converts input values (words, numbers, etc.) to an index in a dictionary. </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Column"> <summary> New column definition(s) (optional form: name:src) </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.MaxNumTerms"> <summary> Maximum number of terms to keep per column when auto-training </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Term"> <summary> List of terms </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Sort"> <summary> How items should be ordered when vectorized. By default, they will be in the order encountered. If sorted by value, items are ordered according to their default comparison; for text this is case-sensitive (e.g., 'A' then 'Z' then 'a'). 
</summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.TextKeyValues"> <summary> Whether key value metadata should be text, regardless of the actual input type </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TextToKeyConverter.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.TrainTestDatasetSplitter"> <summary> Split the dataset into train and test sets </summary> </member> <member name="P:Microsoft.ML.Transforms.TrainTestDatasetSplitter.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TrainTestDatasetSplitter.Fraction"> <summary> Fraction of training data </summary> </member> <member name="P:Microsoft.ML.Transforms.TrainTestDatasetSplitter.StratificationColumn"> <summary> Stratification column </summary> </member> <member name="P:Microsoft.ML.Transforms.TrainTestDatasetSplitter.Output.TrainData"> <summary> Training data </summary> </member> <member name="P:Microsoft.ML.Transforms.TrainTestDatasetSplitter.Output.TestData"> <summary> Testing data </summary> </member> <member name="T:Microsoft.ML.Transforms.TreeLeafFeaturizer"> <summary> Trains a tree ensemble, or loads it from a file, then maps a numeric feature vector to three outputs: 1. A vector containing the individual tree outputs of the tree ensemble. 2. A vector indicating the leaves that the feature vector falls on in the tree ensemble. 3. A vector indicating the paths that the feature vector falls on in the tree ensemble. If both a model file and a trainer are specified, the model file is used. If neither is specified, a default FastTree model is trained. 
This can handle key labels by training a regression model towards their optionally permuted indices. </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.Suffix"> <summary> Output column: The suffix to append to the default column names </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.LabelPermutationSeed"> <summary> If specified, determines the permutation seed for applying this featurizer to a multiclass problem. </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.PredictorModel"> <summary> Trainer to use </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.TreeLeafFeaturizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.Transforms.TwoHeterogeneousModelCombiner"> <summary> Combines a TransformModel and a PredictorModel into a single PredictorModel. </summary> </member> <member name="P:Microsoft.ML.Transforms.TwoHeterogeneousModelCombiner.TransformModel"> <summary> Transform model </summary> </member> <member name="P:Microsoft.ML.Transforms.TwoHeterogeneousModelCombiner.PredictorModel"> <summary> Predictor model </summary> </member> <member name="P:Microsoft.ML.Transforms.TwoHeterogeneousModelCombiner.Output.PredictorModel"> <summary> Predictor model </summary> </member> <member name="P:Microsoft.ML.Transforms.DelimitedTokenizeTransformColumn.TermSeparators"> <summary> Comma separated set of term separator(s). Commonly: 'space', 'comma', 'semicolon' or other single character. 
</summary> </member> <member name="P:Microsoft.ML.Transforms.DelimitedTokenizeTransformColumn.Name"> <summary> Name of the new column </summary> </member> <member name="P:Microsoft.ML.Transforms.DelimitedTokenizeTransformColumn.Source"> <summary> Name of the source column </summary> </member> <member name="T:Microsoft.ML.Transforms.WordTokenizer"> <summary> The input to this transform is text, and the output is a vector of text containing the words (tokens) in the original text. The separator is space, but can be specified as any other character (or multiple characters) if needed. </summary> </member> <member name="P:Microsoft.ML.Transforms.WordTokenizer.Column"> <summary> New column definition(s) </summary> </member> <member name="P:Microsoft.ML.Transforms.WordTokenizer.TermSeparators"> <summary> Comma separated set of term separator(s). Commonly: 'space', 'comma', 'semicolon' or other single character. </summary> </member> <member name="P:Microsoft.ML.Transforms.WordTokenizer.Data"> <summary> Input dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.WordTokenizer.Output.OutputData"> <summary> Transformed dataset </summary> </member> <member name="P:Microsoft.ML.Transforms.WordTokenizer.Output.Model"> <summary> Transform model </summary> </member> <member name="T:Microsoft.ML.ILearningPipelineItem"> <summary> An item that can be added to the Learning Pipeline. </summary> </member> <member name="T:Microsoft.ML.ILearningPipelineLoader"> <summary> A data loader that can be added to the Learning Pipeline. </summary> </member> <member name="T:Microsoft.ML.ILearningPipelineStep"> <summary> An item that can be added to the Learning Pipeline, that can be trained and/or return an IDataView. For a transform, this encapsulates an IDataView (input) and an ITransformModel (output); for a learner, it encapsulates an IDataView (input) and an IPredictorModel (output). 
</summary> </member> <member name="M:Microsoft.ML.LearningPipeline.Execute(Microsoft.ML.Runtime.IHostEnvironment)"> <summary> Executes a pipeline and returns the resulting data. </summary> <returns> The IDataView that was returned by the pipeline. </returns> </member> <member name="T:Microsoft.ML.LearningPipelineDebugProxy"> <summary> The debug proxy class for a LearningPipeline. Displays the current columns and values in the debugger Watch window. </summary> </member> <member name="P:Microsoft.ML.LearningPipelineDebugProxy.Columns"> <summary> Gets the column information of the pipeline. </summary> </member> <member name="P:Microsoft.ML.LearningPipelineDebugProxy.Rows"> <summary> Gets the row information of the pipeline. </summary> </member> <member name="M:Microsoft.ML.PredictionModel.ReadAsync(System.String)"> <summary> Read model from file asynchronously. </summary> <param name="path">Path to the file</param> <returns>Model</returns> </member> <member name="M:Microsoft.ML.PredictionModel.ReadAsync(System.IO.Stream)"> <summary> Read model from stream asynchronously. </summary> <param name="stream">Stream with model</param> <returns>Model</returns> </member> <member name="M:Microsoft.ML.PredictionModel.ReadAsync``2(System.String)"> <summary> Read generic model from file asynchronously. </summary> <typeparam name="TInput">Type for incoming data</typeparam> <typeparam name="TOutput">Type for output data</typeparam> <param name="path">Path to the file</param> <returns>Model</returns> </member> <member name="M:Microsoft.ML.PredictionModel.ReadAsync``2(System.IO.Stream)"> <summary> Read generic model from stream asynchronously. </summary> <typeparam name="TInput">Type for incoming data</typeparam> <typeparam name="TOutput">Type for output data</typeparam> <param name="stream">Stream with model</param> <returns>Model</returns> </member> <member name="M:Microsoft.ML.PredictionModel.Predict(Microsoft.ML.Runtime.Data.IDataView)"> <summary> Run prediction on top of IDataView. 
</summary> <param name="input">Incoming IDataView</param> <returns>IDataView which contains predictions</returns> </member> <member name="M:Microsoft.ML.PredictionModel.WriteAsync(System.String)"> <summary> Save model to file. </summary> <param name="path">File to save model</param> <returns></returns> </member> <member name="M:Microsoft.ML.PredictionModel.WriteAsync(System.IO.Stream)"> <summary> Save model to stream. </summary> <param name="stream">Stream to save model.</param> <returns></returns> </member> <member name="M:Microsoft.ML.PredictionModel`2.Predict(`0)"> <summary> Run prediction for the TInput data. </summary> <param name="input">Input data</param> <returns>Result of prediction</returns> </member> <member name="M:Microsoft.ML.PredictionModel`2.Predict(System.Collections.Generic.IEnumerable{`0})"> <summary> Run prediction for a collection of inputs. </summary> <param name="inputs">Input data</param> <returns>Result of prediction</returns> </member> <member name="M:Microsoft.ML.TextLoader`1.#ctor(System.String,System.Boolean,System.String,System.Boolean,System.Boolean,System.Boolean)"> <summary> Construct a TextLoader object </summary> <param name="inputFilePath">Data file path</param> <param name="useHeader">Whether the file contains a header row</param> <param name="separator">How the columns are separated. Options: separator="tab", separator="space", separator="comma", or any single character. The default, separator=null, means "tab"</param> <param name="allowQuotedStrings">Whether the input may include quoted values, which can contain separator characters, colons, and distinguish empty values from missing values. When true, consecutive separators denote a missing value and an empty value is denoted by \"\". When false, consecutive separators denote an empty value.</param> <param name="supportSparse">Whether the input may include sparse representations e.g. 
if one of the rows contains "5 2:6 4:3", that means there are 5 columns, all zero except the 3rd and 5th, which have values 6 and 3</param> <param name="trimWhitespace">Remove trailing whitespace from lines</param> </member> </members> </doc>
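For illustration, the sparse row format described for supportSparse can be expanded with a short Python sketch (parse_sparse_row is a hypothetical helper, not part of the API):

```python
def parse_sparse_row(text):
    """Expand a sparse row like "5 2:6 4:3" into a dense list of floats."""
    parts = text.split()
    size = int(parts[0])       # leading token: total number of columns
    dense = [0.0] * size
    for pair in parts[1:]:
        idx, val = pair.split(":")
        dense[int(idx)] = float(val)  # indices are zero-based
    return dense
```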