package com.lyzx.day32
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
class T1 {
/**
* Transform Operation
*
* The transform operation (along with its variations like transformWith) allows arbitrary
* RDD-to-RDD functions to be applied on a DStream.
* It can be used to apply any RDD operation that is not exposed in the DStream API. For example,
* the functionality of joining every batch in a data stream with another dataset is not directly exposed
* in the DStream API. However, you can easily use transform to do this.
* This enables very powerful possibilities.
* For example, one can do real-time data cleaning by joining the input data stream with
* precomputed spam information (maybe generated with Spark as well) and then filtering based on it.
*
 * Transform allows arbitrary RDD-to-RDD functions to be applied to a DStream;
 * it can apply any operation available on RDDs even when that operation is
 * not exposed through the DStream API.
 */
}
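
// A minimal sketch of the join-and-filter pattern described in the comment
// above: each batch is joined against a precomputed "spam" dataset and the
// flagged records are dropped. The spam keys, the socket source
// ("localhost", 9999), and the object name are illustrative assumptions,
// not part of the original class.
object TransformDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("TransformDemo")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Precomputed spam information (could itself have been generated by Spark)
    val spamRDD = ssc.sparkContext.parallelize(Seq(("spamUser", true)))

    // One user name per line from a socket source
    val users = ssc.socketTextStream("localhost", 9999).map(user => (user, 1))

    // transform exposes each batch as an RDD, so RDD operations that the
    // DStream API lacks (here leftOuterJoin) can be applied per batch
    val cleaned = users.transform { rdd =>
      rdd.leftOuterJoin(spamRDD)
        .filter { case (_, (_, spamFlag)) => spamFlag.isEmpty } // keep non-spam
        .map { case (user, (count, _)) => (user, count) }
    }

    cleaned.print()
    ssc.start()
    ssc.awaitTermination()
  }
}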