Background
This article was written to understand how StarRocks computes the statistics that feed each operator's cost estimation, as groundwork for later StarRocks optimization work.
This article is based on StarRocks 3.3.5.
Conclusion
StatisticsCalculator fetches the statistics of the underlying data source and propagates them bottom-up through the plan; most of these statistics are themselves estimates.
Analysis
Go straight to the StatisticsCalculator class (it is invoked by DeriveStatsTask). The class uses the classic Visitor pattern, so each kind of operator dispatches to its own visit method:
We will analyze three operators here:
1. OlapScan
2. Filter
3. Projection
The remaining operators can be studied in the source code.
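To make the dispatch mechanism concrete, here is a minimal sketch of the Visitor pattern as used by StatisticsCalculator. The class and method names below are illustrative stand-ins, not the real StarRocks API:

```java
// Minimal sketch of Visitor-based dispatch: each operator routes itself to
// the matching visit method. Names are illustrative, not StarRocks classes.
interface Operator {
    <R> R accept(OperatorVisitor<R> visitor);
}

class OlapScanOperator implements Operator {
    public <R> R accept(OperatorVisitor<R> visitor) { return visitor.visitOlapScan(this); }
}

class FilterOperator implements Operator {
    public <R> R accept(OperatorVisitor<R> visitor) { return visitor.visitFilter(this); }
}

interface OperatorVisitor<R> {
    R visitOlapScan(OlapScanOperator op);
    R visitFilter(FilterOperator op);
}

public class VisitorSketch implements OperatorVisitor<String> {
    public String visitOlapScan(OlapScanOperator op) { return "scan stats"; }
    public String visitFilter(FilterOperator op) { return "filter stats"; }

    public static void main(String[] args) {
        OperatorVisitor<String> v = new VisitorSketch();
        // Each operator call lands in its own visit method.
        System.out.println(new OlapScanOperator().accept(v)); // scan stats
        System.out.println(new FilterOperator().accept(v));   // filter stats
    }
}
```

In the real class, each visit method ends by calling a shared visitOperator, which applies the estimation logic common to all operators (predicates, limit, projection).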
OlapScan operator
@Override
public Void visitLogicalOlapScan(LogicalOlapScanOperator node, ExpressionContext context) {
    return computeOlapScanNode(node, context, node.getTable(), node.getSelectedPartitionId(),
            node.getColRefToColumnMetaMap());
}

@Override
public Void visitPhysicalOlapScan(PhysicalOlapScanOperator node, ExpressionContext context) {
    return computeOlapScanNode(node, context, node.getTable(), node.getSelectedPartitionId(),
            node.getColRefToColumnMetaMap());
}

private Void computeOlapScanNode(Operator node, ExpressionContext context, Table table,
                                 Collection<Long> selectedPartitionIds,
                                 Map<ColumnRefOperator, Column> colRefToColumnMetaMap) {
    Preconditions.checkState(context.arity() == 0);
    // 1. get table row count
    long tableRowCount = StatisticsCalcUtils.getTableRowCount(table, node, optimizerContext);
    // 2. get required columns statistics
    Statistics.Builder builder = StatisticsCalcUtils.estimateScanColumns(table, colRefToColumnMetaMap, optimizerContext);
    if (tableRowCount <= 1) {
        builder.setTableRowCountMayInaccurate(true);
    }
    // 3. deal with column statistics for partition prune
    OlapTable olapTable = (OlapTable) table;
    adjustPartitionColsStatistic(selectedPartitionIds, olapTable, builder, colRefToColumnMetaMap);
    builder.setOutputRowCount(tableRowCount);
    if (isRewrittenMvGE(node, table, context)) {
        adjustNestedMvStatistics(context.getGroupExpression().getGroup(), (MaterializedView) olapTable, builder);
        if (node.getProjection() != null) {
            builder.setShadowColumns(node.getProjection().getOutputColumns());
        }
    }
    // 4. estimate cardinality
    context.setStatistics(builder.build());
    return visitOperator(node, context);
}
The scan operator is the origin of all statistics; every other operator's statistics are computed from the statistics it produces.
StatisticsCalcUtils.getTableRowCount
This method computes the row count first: it fetches the scan's selected partitions, looks up each partition's row count in CachedStatisticStorage, and sums them, with a floor of 1 row.
StatisticsCalcUtils.estimateScanColumns
This method fetches the statistics of the required columns, likewise reading the ColumnStatistic and HistogramStatistics entries from CachedStatisticStorage.
- If the table has at most one row, the builder marks the statistics as possibly inaccurate (setTableRowCountMayInaccurate).
- adjustPartitionColsStatistic then adjusts the statistics of the partition columns after partition pruning.
- visitOperator finally runs further estimation for the scan's predicates, limit, and projection:
  - predicates such as in / or / and are estimated with BaseCalculatingVisitor / LargeOrCalculatingVisitor;
  - the projection is estimated with the ExpressionStatisticVisitor class.
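The row-count logic above can be sketched as follows. This is a simplified illustration of the behavior, not the actual StatisticsCalcUtils code; the method name and the plain Map standing in for CachedStatisticStorage are assumptions:

```java
import java.util.Collection;
import java.util.Map;

// Simplified sketch of getTableRowCount: sum the cached per-partition row
// counts for the selected partitions, and never report fewer than 1 row.
public class RowCountSketch {
    static long estimateTableRowCount(Collection<Long> selectedPartitionIds,
                                      Map<Long, Long> partitionRowCounts) {
        long total = 0;
        for (long partitionId : selectedPartitionIds) {
            // Partitions missing from the cache contribute 0 rows.
            total += partitionRowCounts.getOrDefault(partitionId, 0L);
        }
        // A floor of 1 keeps a zero row count from poisoning downstream cost math.
        return Math.max(total, 1);
    }

    public static void main(String[] args) {
        Map<Long, Long> cache = Map.of(1L, 100L, 2L, 50L);
        System.out.println(estimateTableRowCount(java.util.List.of(1L, 2L), cache)); // 150
        System.out.println(estimateTableRowCount(java.util.List.of(3L), cache));     // 1
    }
}
```

The floor of 1 also explains the setTableRowCountMayInaccurate flag: a result of at most 1 row may simply mean statistics have not been collected yet.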
Filter operator
@Override
public Void visitLogicalFilter(LogicalFilterOperator node, ExpressionContext context) {
    return computeFilterNode(node, context);
}

@Override
public Void visitPhysicalFilter(PhysicalFilterOperator node, ExpressionContext context) {
    return computeFilterNode(node, context);
}

private Void computeFilterNode(Operator node, ExpressionContext context) {
    Statistics inputStatistics = context.getChildStatistics(0);

    Statistics.Builder builder = Statistics.builder();
    builder.addColumnStatistics(inputStatistics.getColumnStatistics());
    builder.setOutputRowCount(inputStatistics.getOutputRowCount());
    context.setStatistics(builder.build());
    return visitOperator(node, context);
}
- computeFilterNode itself simply copies the child's statistics, so at this point the filter node's statistics are identical to its child's; the selectivity of the filter's predicate is only applied afterwards, in visitOperator.
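To make the predicate estimation step concrete, here is a simplified selectivity sketch. These are the standard textbook formulas, not the exact logic of StarRocks' predicate visitors: an equality predicate on a column is typically estimated at 1/NDV, a range predicate uses the fraction of the column's min/max range it covers, and an AND of independent predicates multiplies selectivities:

```java
// Simplified selectivity estimation, in the spirit of what the predicate
// visitors compute. Textbook formulas, not the StarRocks implementation.
public class SelectivitySketch {
    // col = const : assume a uniform distribution over the distinct values.
    static double equalitySelectivity(double distinctValues) {
        return distinctValues <= 0 ? 1.0 : 1.0 / distinctValues;
    }

    // col < bound : the fraction of the [min, max] range below the bound.
    static double rangeSelectivity(double min, double max, double bound) {
        if (bound <= min) return 0.0;
        if (bound >= max) return 1.0;
        return (bound - min) / (max - min);
    }

    // p1 AND p2, assuming the predicates are independent.
    static double andSelectivity(double s1, double s2) {
        return s1 * s2;
    }

    public static void main(String[] args) {
        double inputRows = 1000;
        double sel = andSelectivity(equalitySelectivity(10),       // 1/10  = 0.1
                                    rangeSelectivity(0, 100, 50)); // 50/100 = 0.5
        System.out.println(inputRows * sel); // 50.0
    }
}
```

This is why accurate NDV and min/max statistics on the scan matter: every filter estimate upstream is a function of them.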
Projection operator
@Override
public Void visitLogicalProject(LogicalProjectOperator node, ExpressionContext context) {
    return computeProjectNode(context, node.getColumnRefMap());
}

@Override
public Void visitPhysicalProject(PhysicalProjectOperator node, ExpressionContext context) {
    return computeProjectNode(context, node.getColumnRefMap());
}

private Void computeProjectNode(ExpressionContext context, Map<ColumnRefOperator, ScalarOperator> columnRefMap) {
    Preconditions.checkState(context.arity() == 1);
    Statistics.Builder builder = Statistics.builder();
    Statistics inputStatistics = context.getChildStatistics(0);
    builder.setOutputRowCount(inputStatistics.getOutputRowCount());

    Statistics.Builder allBuilder = Statistics.builder();
    allBuilder.setOutputRowCount(inputStatistics.getOutputRowCount());
    allBuilder.addColumnStatistics(inputStatistics.getColumnStatistics());

    for (ColumnRefOperator requiredColumnRefOperator : columnRefMap.keySet()) {
        ScalarOperator mapOperator = columnRefMap.get(requiredColumnRefOperator);
        if (mapOperator instanceof SubfieldOperator && context.getOptExpression() != null) {
            Operator child = context.getOptExpression().inputAt(0).getOp();
            if (child instanceof LogicalScanOperator || child instanceof PhysicalScanOperator) {
                addSubFiledStatistics(child, ImmutableMap.of(requiredColumnRefOperator,
                        (SubfieldOperator) mapOperator), builder);
                continue;
            }
        }
        ColumnStatistic outputStatistic =
                ExpressionStatisticCalculator.calculate(mapOperator, allBuilder.build());
        builder.addColumnStatistic(requiredColumnRefOperator, outputStatistic);
        allBuilder.addColumnStatistic(requiredColumnRefOperator, outputStatistic);
    }

    context.setStatistics(builder.build());
    return visitOperator(context.getOp(), context);
}
For Projection, each output expression is handled according to its own kind (again via the ExpressionStatisticVisitor class): case when expressions, function calls, and so on each derive different min and max values for the resulting columns.
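As a simplified example of what per-expression statistics derivation looks like (illustrative only; the class, methods, and two rules below are assumptions, not the real ExpressionStatisticCalculator code): adding a constant shifts a column's min/max, and a case when result's range is the union of its branches' ranges.

```java
// Simplified column-statistic derivation for projection expressions.
// Mirrors the idea (not the code) of ExpressionStatisticCalculator: each
// output expression derives a new min/max from its inputs' statistics.
public class ExprStatsSketch {
    static final class ColumnStat {
        final double min, max;
        ColumnStat(double min, double max) { this.min = min; this.max = max; }
    }

    // col + c : the whole value range shifts by c.
    static ColumnStat addConstant(ColumnStat in, double c) {
        return new ColumnStat(in.min + c, in.max + c);
    }

    // CASE WHEN ... THEN a ELSE b END : the result range covers both branches.
    static ColumnStat caseWhen(ColumnStat thenStat, ColumnStat elseStat) {
        return new ColumnStat(Math.min(thenStat.min, elseStat.min),
                              Math.max(thenStat.max, elseStat.max));
    }

    public static void main(String[] args) {
        ColumnStat col = new ColumnStat(0, 100);
        ColumnStat shifted = addConstant(col, 10);
        System.out.println(shifted.min + ".." + shifted.max); // 10.0..110.0

        ColumnStat merged = caseWhen(new ColumnStat(1, 5), new ColumnStat(-3, 2));
        System.out.println(merged.min + ".." + merged.max);   // -3.0..5.0
    }
}
```

This also shows why computeProjectNode keeps the allBuilder around: an expression later in the projection list may reference a column produced earlier, so its derived statistics must already be visible.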