Testing checkpoints
Code
This is a simple Flink application using a stateful mapper with an Integer managed state. You can play with the checkpointEnable, checkpointInterval and checkpointMode variables to see their effect:
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.time.LocalDateTime;
import java.util.Properties;

import static java.time.format.DateTimeFormatter.ISO_LOCAL_DATE_TIME;

public class CheckpointExample {
    private static final Logger LOG = LoggerFactory.getLogger(CheckpointExample.class);
    private static final String KAFKA_BROKER = "localhost:9092";
    private static final String KAFKA_INPUT_TOPIC = "input-topic";
    private static final String KAFKA_GROUP_ID = "flink-stackoverflow-checkpointer";
    private static final String CLASS_NAME = CheckpointExample.class.getSimpleName();

    public static void main(String[] args) throws Exception {
        // play with them
        boolean checkpointEnable = false;
        long checkpointInterval = 1000;
        CheckpointingMode checkpointMode = CheckpointingMode.EXACTLY_ONCE;
        // ----------------------------------------------------

        LOG.info(CLASS_NAME + ": starting...");
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // kafka source
        // https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/kafka.html#kafka-consumer
        Properties prop = new Properties();
        prop.put("bootstrap.servers", KAFKA_BROKER);
        prop.put("group.id", KAFKA_GROUP_ID);
        prop.put("auto.offset.reset", "latest");
        prop.put("enable.auto.commit", "false");

        FlinkKafkaConsumer09<String> source = new FlinkKafkaConsumer09<>(
                KAFKA_INPUT_TOPIC, new SimpleStringSchema(), prop);

        // checkpoints
        // internals: https://ci.apache.org/projects/flink/flink-docs-master/internals/stream_checkpointing.html#checkpointing
        // config: https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/checkpointing.html
        if (checkpointEnable) env.enableCheckpointing(checkpointInterval, checkpointMode);

        env
                .addSource(source)
                .keyBy((any) -> 1)
                .flatMap(new StatefulMapper())
                .print();

        env.execute(CLASS_NAME);
    }

    /* *****************************************************************
     * Stateful mapper
     * (cf. https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/state.html)
     * ****************************************************************/
    public static class StatefulMapper extends RichFlatMapFunction<String, String> {
        private transient ValueState<Integer> state;

        @Override
        public void flatMap(String record, Collector<String> collector) throws Exception {
            // access the state value
            Integer currentState = state.value();
            // update the counts
            currentState += 1;
            collector.collect(String.format("%s: (%s,%d)",
                    LocalDateTime.now().format(ISO_LOCAL_DATE_TIME), record, currentState));
            // update the state
            state.update(currentState);
        }

        @Override
        public void open(Configuration parameters) throws Exception {
            ValueStateDescriptor<Integer> descriptor =
                    new ValueStateDescriptor<>("CheckpointExample", TypeInformation.of(Integer.class), 0);
            state = getRuntimeContext().getState(descriptor);
        }
    }
}
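Not used in this example, but handy when experimenting: once checkpointing is enabled, further knobs are available through env.getCheckpointConfig(), and an explicit restart strategy controls how the job is resubmitted after a failure (when checkpointing is on and nothing else is configured, Flink falls back to a fixed-delay restart strategy). A minimal sketch, assuming the same env as above and RestartStrategies from org.apache.flink.api.common.restartstrategy; the concrete values are arbitrary:

// optional checkpoint tuning (values are arbitrary examples)
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(500); // at least 500 ms between two checkpoints
env.getCheckpointConfig().setCheckpointTimeout(60000);        // give up on a checkpoint after 60 s
// explicit restart strategy: retry at most 3 times, waiting 5 s between attempts
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000));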
Running the example and simulating a failure
To be able to check the checkpoints, you need to start a cluster. The easiest way is to use the start-cluster.sh script in the flink/bin directory:
start-cluster.sh
Starting cluster.
[INFO] 1 instance(s) of jobmanager are already running on virusnest.
Starting jobmanager daemon on host virusnest.
Password:
Starting taskmanager daemon on host virusnest.
Now, package your application and submit it to flink:
mvn clean package
flink run -c CheckpointExample target/flink-checkpoints-test.jar
Create some data:
kafka-console-producer --broker-list localhost:9092 --topic input-topic
a
b
c
^D
The output should be available in flink/logs/flink-<user>-jobmanager-0-<host>.out. For example:
tail -f flink/logs/flink-Derlin-jobmanager-0-virusnest.out
2017-03-17T08:21:51.249: (a,1)
2017-03-17T08:21:51.545: (b,2)
2017-03-17T08:21:52.363: (c,3)
To test the checkpoints, simply kill the taskmanager (this simulates a failure), produce some data, and start a new one:
# killing the taskmanager
ps -ef | grep -i taskmanager
kill <taskmanager PID>
# starting a new taskmanager
flink/bin/taskmanager.sh start
Note: when starting a new taskmanager, it will use another log file, namely flink/logs/flink-<user>-jobmanager-1-<host>.out (note the integer increment).
What to expect
- checkpointing disabled: if you produce data during the failure, it will definitely be lost. But surprisingly enough, the counters will be right!
- checkpointing enabled: no more data loss (and the counters are correct).
- checkpointing with at-least-once mode: you may see duplicates, especially if you set the checkpoint interval to a high number and kill the taskmanager several times (see the sketch below for switching modes)
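For reference, switching between modes in the example above is just a matter of changing the checkpointMode variable passed to enableCheckpointing. A minimal sketch of the at-least-once variant, reusing the env and checkpointInterval from the code above:

// enable checkpointing with at-least-once guarantees instead of exactly-once
env.enableCheckpointing(checkpointInterval, CheckpointingMode.AT_LEAST_ONCE);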