This is an automated email from the ASF dual-hosted git repository.

showuon pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new a39d447677 MINOR: fix streams tutorial (#12251)
a39d447677 is described below

commit a39d447677d2886c463bd11377896a0f7f5bf6dd
Author: Okada Haruki <[email protected]>
AuthorDate: Sat Jun 4 14:23:11 2022 +0900

    MINOR: fix streams tutorial (#12251)
    
    Reviewers: Luke Chen <[email protected]>
---
 docs/streams/tutorial.html | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/streams/tutorial.html b/docs/streams/tutorial.html
index a526de568a..017d779682 100644
--- a/docs/streams/tutorial.html
+++ b/docs/streams/tutorial.html
@@ -452,7 +452,7 @@ source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
     <p>
         Note that the <code>count</code> operator has a <code>Materialized</code> parameter that specifies that the
         running count should be stored in a state store named <code>counts-store</code>.
-        This <code>Counts</code> store can be queried in real-time, with details described in the <a href="/{{version}}/documentation/streams/developer-guide#streams_interactive_queries">Developer Manual</a>.
+        This <code>counts-store</code> store can be queried in real-time, with details described in the <a href="/{{version}}/documentation/streams/developer-guide#streams_interactive_queries">Developer Manual</a>.
     </p>
 
     <p>
@@ -490,10 +490,10 @@ Sub-topologies:
     Processor: KSTREAM-FLATMAPVALUES-0000000001(stores: []) --> KSTREAM-KEY-SELECT-0000000002 <-- KSTREAM-SOURCE-0000000000
     Processor: KSTREAM-KEY-SELECT-0000000002(stores: []) --> KSTREAM-FILTER-0000000005 <-- KSTREAM-FLATMAPVALUES-0000000001
     Processor: KSTREAM-FILTER-0000000005(stores: []) --> KSTREAM-SINK-0000000004 <-- KSTREAM-KEY-SELECT-0000000002
-    Sink: KSTREAM-SINK-0000000004(topic: Counts-repartition) <-- KSTREAM-FILTER-0000000005
+    Sink: KSTREAM-SINK-0000000004(topic: counts-store-repartition) <-- KSTREAM-FILTER-0000000005
   Sub-topology: 1
-    Source: KSTREAM-SOURCE-0000000006(topics: Counts-repartition) --> KSTREAM-AGGREGATE-0000000003
-    Processor: KSTREAM-AGGREGATE-0000000003(stores: [Counts]) --> KTABLE-TOSTREAM-0000000007 <-- KSTREAM-SOURCE-0000000006
+    Source: KSTREAM-SOURCE-0000000006(topics: counts-store-repartition) --> KSTREAM-AGGREGATE-0000000003
+    Processor: KSTREAM-AGGREGATE-0000000003(stores: [counts-store]) --> KTABLE-TOSTREAM-0000000007 <-- KSTREAM-SOURCE-0000000006
     Processor: KTABLE-TOSTREAM-0000000007(stores: []) --> KSTREAM-SINK-0000000008 <-- KSTREAM-AGGREGATE-0000000003
     Sink: KSTREAM-SINK-0000000008(topic: streams-wordcount-output) <-- KTABLE-TOSTREAM-0000000007
 Global Stores:
@@ -501,14 +501,14 @@ Global Stores:
 
     <p>
         As we can see above, the topology now contains two disconnected sub-topologies.
-        The first sub-topology's sink node <code>KSTREAM-SINK-0000000004</code> will write to a repartition topic <code>Counts-repartition</code>,
+        The first sub-topology's sink node <code>KSTREAM-SINK-0000000004</code> will write to a repartition topic <code>counts-store-repartition</code>,
         which will be read by the second sub-topology's source node <code>KSTREAM-SOURCE-0000000006</code>.
         The repartition topic is used to "shuffle" the source stream by its aggregation key, which is in this case the value string.
         In addition, inside the first sub-topology a stateless <code>KSTREAM-FILTER-0000000005</code> node is injected between the grouping <code>KSTREAM-KEY-SELECT-0000000002</code> node and the sink node to filter out any intermediate record whose aggregate key is empty.
     </p>
     <p>
-        In the second sub-topology, the aggregation node <code>KSTREAM-AGGREGATE-0000000003</code> is associated with a state store named <code>Counts</code> (the name is specified by the user in the <code>count</code> operator).
-        Upon receiving each record from its upcoming stream source node, the aggregation processor will first query its associated <code>Counts</code> store to get the current count for that key, augment by one, and then write the new count back to the store.
+        In the second sub-topology, the aggregation node <code>KSTREAM-AGGREGATE-0000000003</code> is associated with a state store named <code>counts-store</code> (the name is specified by the user in the <code>count</code> operator).
+        Upon receiving each record from its upcoming stream source node, the aggregation processor will first query its associated <code>counts-store</code> store to get the current count for that key, augment by one, and then write the new count back to the store.
         Each updated count for the key will also be piped downstream to the <code>KTABLE-TOSTREAM-0000000007</code> node, which interpret this update stream as a record stream before further piping to the sink node <code>KSTREAM-SINK-0000000008</code> for writing back to Kafka.
     </p>
 
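For readers checking the corrected names, the tutorial's pipeline can be sketched as below. This is a minimal sketch, not the patched file itself: it assumes the org.apache.kafka:kafka-streams dependency is on the classpath, and the class name is illustrative. The string passed to Materialized.as is what surfaces in the topology description as the state store [counts-store] and the repartition topic counts-store-repartition.

```java
import java.util.Arrays;
import java.util.Locale;

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class WordCountTopologySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("streams-plaintext-input");
        KTable<String, Long> counts = source
            // Split each text line into lowercase words.
            .flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
            // Re-key by the word itself; this is what forces the repartition.
            .groupBy((key, value) -> value)
            // The store name given here is what the topology description shows
            // as (stores: [counts-store]) and as the counts-store-repartition topic.
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
        counts.toStream().to("streams-wordcount-output");
        // Prints the two sub-topologies discussed in the tutorial text.
        System.out.println(builder.build().describe());
    }
}
```

Renaming the store in Materialized.as therefore renames both the state store and the derived repartition topic, which is why the patch updates Counts/Counts-repartition to counts-store/counts-store-repartition throughout.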
