xintongsong commented on code in PR #158:
URL: https://github.com/apache/flink-agents/pull/158#discussion_r2339434490


##########
python/flink_agents/examples/quickstart/agents/product_suggestion_agent.py:
##########
@@ -0,0 +1,177 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+import json
+import logging
+from typing import Any, Dict, List, Tuple, Type
+
+from pydantic import BaseModel
+
+from flink_agents.api.agent import Agent
+from flink_agents.api.chat_message import ChatMessage, MessageRole
+from flink_agents.api.chat_models.chat_model import (
+    BaseChatModelConnection,
+    BaseChatModelSetup,
+)
+from flink_agents.api.decorators import (
+    action,
+    chat_model_connection,
+    chat_model_setup,
+    prompt,
+)
+from flink_agents.api.events.chat_event import ChatRequestEvent, ChatResponseEvent
+from flink_agents.api.events.event import InputEvent, OutputEvent
+from flink_agents.api.prompts.prompt import Prompt
+from flink_agents.api.runner_context import RunnerContext
+from flink_agents.integrations.chat_models.ollama_chat_model import (
+    OllamaChatModelConnection,
+    OllamaChatModelSetup,
+)
+
+
+class ProductReviewSummary(BaseModel):
+    """Aggregates multiple reviews and insights using LLM for a product.
+
+    Attributes:
+        id (int): The unique identifier of the product.
+        score_hist (List[str]): A collection of rating scores from various reviews.
+        unsatisfied_reasons (List[str]): A list of reasons or insights generated by LLM
+            to explain the rating.
+    """
+
+    id: int
+    score_hist: List[str]
+    unsatisfied_reasons: List[str]
+
+
+class ProductSuggestion(BaseModel):
+    """Provides a summary of review data including suggestions for improvement.
+
+    Attributes:
+        id (int): The unique identifier of the product.
+        score_hist (List[str]): A collection of rating scores from various reviews.
+        suggestions (List[str]): Suggestions or recommendations generated as a result of
+            review analysis.
+    """
+
+    id: int
+    score_hist: List[str]
+    suggestions: List[str]
+
+
+class ProductSuggestionAgent(Agent):
+    """An agent that uses a large language model (LLM) to generate actionable product
+    improvement suggestions from aggregated product review data.
+
+    This agent receives a summary of product reviews, including a rating distribution
+    and a list of user dissatisfaction reasons, and produces concrete suggestions for
+    product enhancement. It handles prompt construction, LLM interaction, and output
+    parsing.
+    """
+
+    @prompt
+    @staticmethod
+    def generate_suggestion_prompt() -> Prompt:
+        """Generate product suggestions based on the rating distribution and user
+        dissatisfaction reasons.
+        """
+        prompt_str = """
+        Based on the rating distribution and user dissatisfaction reasons, generate three actionable suggestions for product improvement.
+
+        Input format:
+        {{
+            "id": "1",
+            "score_histogram": ["10%", "20%", "10%", "15%", "45%"],
+            "unsatisfied_reasons": ["reason1", "reason2", "reason3"]
+        }}
+
+        Ensure that your response can be parsed by Python JSON, using the following format as an example:
+        {{
+            "suggestion_list": [
+                "suggestion1",
+                "suggestion2",
+                "suggestion3"
+            ]
+        }}
+
+        input:
+        {input}
+        """
+        return Prompt.from_text("generate_suggestion_prompt", prompt_str)
+
+    @chat_model_connection
+    @staticmethod
+    def ollama_server() -> Tuple[Type[BaseChatModelConnection], Dict[str, Any]]:
+        """ChatModelServer responsible for model service connection."""
+        return OllamaChatModelConnection, {
+            "name": "ollama_server",
+            "model": "qwen2.5:7b",
+            "request_timeout": 120,
+        }

Review Comment:
   Is this identical to the model connection used in the other agent? We might 
show how to reuse resources with this example.
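   For illustration, one hedged sketch of what resource reuse could look like: pull the connection config into a shared module-level constant that both agents return. The flink-agents decorators are omitted here, and all names besides the config keys are hypothetical:

   ```python
   # Sketch only: a shared connection config that both agents could return from
   # their @chat_model_connection methods instead of duplicating the literal.
   # The decorator machinery is omitted; class names are hypothetical.
   SHARED_OLLAMA_CONNECTION = {
       "name": "ollama_server",
       "model": "qwen2.5:7b",
       "request_timeout": 120,
   }


   class ReviewAnalysisAgentSketch:
       @staticmethod
       def ollama_server():
           # Both agents hand back the same shared config object.
           return SHARED_OLLAMA_CONNECTION


   class ProductSuggestionAgentSketch:
       @staticmethod
       def ollama_server():
           return SHARED_OLLAMA_CONNECTION
   ```

   Whether the framework also deduplicates the connection at runtime when two agents declare the same config would need to be checked separately.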



##########
python/flink_agents/examples/quickstart/product_review_analysis.py:
##########
@@ -0,0 +1,96 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+from pathlib import Path
+
+from pyflink.common import Duration, WatermarkStrategy
+from pyflink.datastream import KeySelector, StreamExecutionEnvironment
+from pyflink.datastream.connectors.file_system import FileSource, StreamFormat
+
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.examples.quickstart.agents.review_analysis_agent import (
+    ProductReview,
+    ReviewAnalysisAgent,
+)
+
+current_dir = Path(__file__).parent
+
+
+class MyKeySelector(KeySelector):
+    """KeySelector for extracting key."""
+
+    def get_key(self, value: ProductReview) -> int:
+        """Extract key from ProductReview."""
+        return value.id
+
+
+def main() -> None:
+    """Main function for the product review analysis quickstart example.
+
+    This example demonstrates how to use Flink Agents to analyze product reviews in a
+    streaming pipeline. The pipeline reads product reviews from a file, deserializes
+    each review, and uses an LLM agent to extract review scores and unsatisfied reasons.
+    The results are printed to stdout. This serves as a minimal, end-to-end example of
+    integrating LLM-powered agents with Flink streaming jobs.
+    """
+    # Set up the Flink streaming environment and the Agents execution environment.
+    env = StreamExecutionEnvironment.get_execution_environment()
+    agents_env = AgentsExecutionEnvironment.get_execution_environment(env=env)
+
+    # Add required flink-agents jars to the environment.
+    env.add_jars(
+        f"file:///{current_dir}/../../../../runtime/target/flink-agents-runtime-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../plan/target/flink-agents-plan-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../api/target/flink-agents-api-0.1-SNAPSHOT.jar"
+    )

Review Comment:
   We should not require manually adding the framework jars in user code. 
Ideally, we should have everything needed by flink-agents as an uber jar, and 
let users place it into the flink `lib/` directory.



##########
python/flink_agents/examples/quickstart/product_review_analysis.py:
##########
@@ -0,0 +1,96 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+from pathlib import Path
+
+from pyflink.common import Duration, WatermarkStrategy
+from pyflink.datastream import KeySelector, StreamExecutionEnvironment
+from pyflink.datastream.connectors.file_system import FileSource, StreamFormat
+
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.examples.quickstart.agents.review_analysis_agent import (
+    ProductReview,
+    ReviewAnalysisAgent,
+)
+
+current_dir = Path(__file__).parent
+
+
+class MyKeySelector(KeySelector):
+    """KeySelector for extracting key."""
+
+    def get_key(self, value: ProductReview) -> int:
+        """Extract key from ItemData."""
+        return value.id
+
+
+def main() -> None:
+    """Main function for the product review analysis quickstart example.
+
+    This example demonstrates how to use Flink Agents to analyze product reviews in a
+    streaming pipeline. The pipeline reads product reviews from a file, deserializes
+    each review, and uses an LLM agent to extract review scores and unsatisfied reasons.
+    The results are printed to stdout. This serves as a minimal, end-to-end example of
+    integrating LLM-powered agents with Flink streaming jobs.
+    """
+    # Set up the Flink streaming environment and the Agents execution environment.
+    env = StreamExecutionEnvironment.get_execution_environment()
+    agents_env = AgentsExecutionEnvironment.get_execution_environment(env=env)

Review Comment:
   Minor: does this work?
   ```suggestion
       agents_env = AgentsExecutionEnvironment.get_execution_environment(env)
   ```



##########
python/flink_agents/examples/quickstart/agents/product_suggestion_agent.py:
##########
@@ -0,0 +1,177 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+import json
+import logging
+from typing import Any, Dict, List, Tuple, Type
+
+from pydantic import BaseModel
+
+from flink_agents.api.agent import Agent
+from flink_agents.api.chat_message import ChatMessage, MessageRole
+from flink_agents.api.chat_models.chat_model import (
+    BaseChatModelConnection,
+    BaseChatModelSetup,
+)
+from flink_agents.api.decorators import (
+    action,
+    chat_model_connection,
+    chat_model_setup,
+    prompt,
+)
+from flink_agents.api.events.chat_event import ChatRequestEvent, ChatResponseEvent
+from flink_agents.api.events.event import InputEvent, OutputEvent
+from flink_agents.api.prompts.prompt import Prompt
+from flink_agents.api.runner_context import RunnerContext
+from flink_agents.integrations.chat_models.ollama_chat_model import (
+    OllamaChatModelConnection,
+    OllamaChatModelSetup,
+)
+
+
+class ProductReviewSummary(BaseModel):
+    """Aggregates multiple reviews and insights using LLM for a product.
+
+    Attributes:
+        id (int): The unique identifier of the product.
+        score_hist (List[str]): A collection of rating scores from various reviews.
+        unsatisfied_reasons (List[str]): A list of reasons or insights generated by LLM
+            to explain the rating.
+    """
+
+    id: int
+    score_hist: List[str]
+    unsatisfied_reasons: List[str]
+
+
+class ProductSuggestion(BaseModel):
+    """Provides a summary of review data including suggestions for improvement.
+
+    Attributes:
+        id (int): The unique identifier of the product.
+        score_hist (List[str]): A collection of rating scores from various reviews.
+        suggestions (List[str]): Suggestions or recommendations generated as a result of
+            review analysis.
+    """
+
+    id: int
+    score_hist: List[str]
+    suggestions: List[str]
+
+
+class ProductSuggestionAgent(Agent):
+    """An agent that uses a large language model (LLM) to generate actionable product
+    improvement suggestions from aggregated product review data.
+
+    This agent receives a summary of product reviews, including a rating distribution
+    and a list of user dissatisfaction reasons, and produces concrete suggestions for
+    product enhancement. It handles prompt construction, LLM interaction, and output
+    parsing.
+    """
+
+    @prompt
+    @staticmethod
+    def generate_suggestion_prompt() -> Prompt:
+        """Generate product suggestions based on the rating distribution and user
+        dissatisfaction reasons.
+        """
+        prompt_str = """
+        Based on the rating distribution and user dissatisfaction reasons, generate three actionable suggestions for product improvement.
+
+        Input format:
+        {{
+            "id": "1",
+            "score_histogram": ["10%", "20%", "10%", "15%", "45%"],
+            "unsatisfied_reasons": ["reason1", "reason2", "reason3"]
+        }}
+
+        Ensure that your response can be parsed by Python JSON, using the following format as an example:
+        {{
+            "suggestion_list": [
+                "suggestion1",
+                "suggestion2",
+                "suggestion3"
+            ]
+        }}
+
+        input:
+        {input}
+        """
+        return Prompt.from_text("generate_suggestion_prompt", prompt_str)
+
+    @chat_model_connection
+    @staticmethod
+    def ollama_server() -> Tuple[Type[BaseChatModelConnection], Dict[str, Any]]:
+        """ChatModelServer responsible for model service connection."""
+        return OllamaChatModelConnection, {
+            "name": "ollama_server",
+            "model": "qwen2.5:7b",
+            "request_timeout": 120,
+        }
+
+    @chat_model_setup
+    @staticmethod
+    def generate_suggestion_model() -> Tuple[Type[BaseChatModelSetup], Dict[str, Any]]:
+        """ChatModel which focuses on generating product suggestions."""
+        return OllamaChatModelSetup, {
+            "name": "generate_suggestion_model",
+            "connection": "ollama_server",
+            "prompt": "generate_suggestion_prompt",
+            "extract_reasoning": True,
+        }
+
+    @action(InputEvent)
+    @staticmethod
+    def process_input(event: InputEvent, ctx: RunnerContext) -> None:
+        """Process input event."""
+        input: ProductReviewSummary = event.input
+        ctx.get_short_term_memory().set("id", input.id)

Review Comment:
   Minor: Would this be better?
   ```suggestion
           ctx.short_term_memory().set("id", input.id)
   ```



##########
python/flink_agents/examples/quickstart/agents/review_analysis_agent.py:
##########
@@ -0,0 +1,167 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+import json
+import logging
+from typing import Any, Dict, Tuple, Type
+
+from pydantic import BaseModel
+
+from flink_agents.api.agent import Agent
+from flink_agents.api.chat_message import ChatMessage, MessageRole
+from flink_agents.api.chat_models.chat_model import (
+    BaseChatModelConnection,
+    BaseChatModelSetup,
+)
+from flink_agents.api.decorators import (
+    action,
+    chat_model_connection,
+    chat_model_setup,
+    prompt,
+)
+from flink_agents.api.events.chat_event import ChatRequestEvent, ChatResponseEvent
+from flink_agents.api.events.event import InputEvent, OutputEvent
+from flink_agents.api.prompts.prompt import Prompt
+from flink_agents.api.runner_context import RunnerContext
+from flink_agents.integrations.chat_models.ollama_chat_model import (
+    OllamaChatModelConnection,
+    OllamaChatModelSetup,
+)
+
+
+class ProductReview(BaseModel):
+    """Data model representing a product review.
+
+    Attributes:
+    ----------
+    id : int
+        The unique identifier for the product being reviewed.
+    review : str
+        The review of the product.
+    """
+
+    id: int
+    review: str
+
+
+class ProductReviewAnalysisRes(BaseModel):
+    """Data model representing analysis result of a product review.
+
+    Attributes:
+    ----------
+    id : int
+        The unique identifier for the product being reviewed.
+    score : int
+        The satisfaction score given by the reviewer.
+    reasons : List[str]
+        A list of reasons provided by the reviewer for dissatisfaction, if any.
+    """
+
+    id: int
+    score: int
+    reasons: list[str]
+
+
+class ReviewAnalysisAgent(Agent):
+    """An agent that uses a large language model (LLM) to analyze product reviews
+    and generate a satisfaction score and potential reasons for dissatisfaction.
+
+    This agent receives a product review and produces a satisfaction score and a list
+    of reasons for dissatisfaction. It handles prompt construction, LLM interaction,
+    and output parsing.
+    """
+
+    @prompt
+    @staticmethod
+    def review_analysis_prompt() -> Prompt:
+        """Prompt for review analysis."""
+        prompt_str = """
+    Analyze the user review and product information to determine a
+    satisfaction score (1-5) and potential reasons for dissatisfaction.
+
+    Example input format:
+    {{
+        "id": "12345",
+        "review": "The headphones broke after one week of use. Very poor quality."
+    }}
+
+    Ensure your response can be parsed by Python JSON, using this format as an example:
+    {{
+     "score": 1,
+     "reasons": [
+       "poor quality"
+       ]
+    }}
+
+    input: {input}
+    """
+        return Prompt.from_text("review_analysis_prompt", prompt_str)
+
+    @chat_model_connection
+    @staticmethod
+    def ollama_server() -> Tuple[Type[BaseChatModelConnection], Dict[str, Any]]:
+        """ChatModelServer responsible for model service connection."""
+        return OllamaChatModelConnection, {
+            "name": "ollama_server",
+            "model": "qwen2.5:7b",
+            "request_timeout": 120,
+        }
+
+    @chat_model_setup
+    @staticmethod
+    def review_analysis_model() -> Tuple[Type[BaseChatModelSetup], Dict[str, Any]]:
+        """ChatModel which focuses on review analysis."""
+        return OllamaChatModelSetup, {
+            "name": "review_analysis_model",
+            "connection": "ollama_server",
+            "prompt": "review_analysis_prompt",
+            "extract_reasoning": True,
+        }
+
+    @action(InputEvent)
+    @staticmethod
+    def process_input(event: InputEvent, ctx: RunnerContext) -> None:
+        """Process input event and send chat request for review analysis."""
+        input: ProductReview = event.input
+        ctx.get_short_term_memory().set("id", input.id)
+
+        content = f"""
+            "id": {input.id},
+            "review": {input.review}
+        """
+        msg = ChatMessage(role=MessageRole.USER, extra_args={"input": content})
+        ctx.send_event(ChatRequestEvent(model="review_analysis_model", messages=[msg]))
+
+    @action(ChatResponseEvent)
+    @staticmethod
+    def process_chat_response(event: ChatResponseEvent, ctx: RunnerContext) -> None:
+        """Process chat response event and send output event."""
+        try:
+            json_content = json.loads(event.response.content)
+            ctx.send_event(
+                OutputEvent(
+                    output=ProductReviewAnalysisRes(
+                        id=ctx.get_short_term_memory().get("id"),
+                        score=json_content["score"],
+                        reasons=json_content["reasons"],
+                    )
+                )
+            )
+        except Exception as e:
+            logging.exception(
+                f"Error processing chat response {event.response.content}", exc_info=e
+            )

Review Comment:
   How can a user fail the agent?
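   For context, the handler above catches and only logs parse failures, so a bad record is silently dropped. One plausible answer, sketched here without the flink-agents types (assuming an exception raised from an action propagates and fails the Flink task), is to re-raise after logging:

   ```python
   import json
   import logging


   def handle_response(content: str) -> dict:
       """Parse an LLM response; log and re-raise on failure so the error is
       surfaced to the runtime rather than swallowed. Sketch only."""
       try:
           data = json.loads(content)
           return {"score": data["score"], "reasons": data["reasons"]}
       except (json.JSONDecodeError, KeyError):
           logging.exception("Error processing chat response %s", content)
           raise  # propagating the exception is what fails the agent
   ```

   Whether the runtime treats an exception from an action as a task failure (versus retrying or dropping the event) is the part worth documenting in the example.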



##########
python/flink_agents/examples/quickstart/product_review_analysis.py:
##########
@@ -0,0 +1,96 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+from pathlib import Path
+
+from pyflink.common import Duration, WatermarkStrategy
+from pyflink.datastream import KeySelector, StreamExecutionEnvironment
+from pyflink.datastream.connectors.file_system import FileSource, StreamFormat
+
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.examples.quickstart.agents.review_analysis_agent import (
+    ProductReview,
+    ReviewAnalysisAgent,
+)
+
+current_dir = Path(__file__).parent
+
+
+class MyKeySelector(KeySelector):
+    """KeySelector for extracting key."""
+
+    def get_key(self, value: ProductReview) -> int:
+        """Extract key from ProductReview."""
+        return value.id
+
+
+def main() -> None:
+    """Main function for the product review analysis quickstart example.
+
+    This example demonstrates how to use Flink Agents to analyze product reviews in a
+    streaming pipeline. The pipeline reads product reviews from a file, deserializes
+    each review, and uses an LLM agent to extract review scores and unsatisfied reasons.
+    The results are printed to stdout. This serves as a minimal, end-to-end example of
+    integrating LLM-powered agents with Flink streaming jobs.
+    """
+    # Set up the Flink streaming environment and the Agents execution environment.
+    env = StreamExecutionEnvironment.get_execution_environment()
+    agents_env = AgentsExecutionEnvironment.get_execution_environment(env=env)
+
+    # Add required flink-agents jars to the environment.
+    env.add_jars(
+        f"file:///{current_dir}/../../../../runtime/target/flink-agents-runtime-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../plan/target/flink-agents-plan-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../api/target/flink-agents-api-0.1-SNAPSHOT.jar"
+    )
+
+    # Read product reviews from a text file as a streaming source.
+    # Each line in the file should be a JSON string representing a ProductReview.
+    product_review_stream = env.from_source(
+        source=FileSource.for_record_stream_format(
+            StreamFormat.text_line_format(), f"file:///{current_dir}/resources"
+        )
+        .monitor_continuously(Duration.of_minutes(1))
+        .build(),
+        watermark_strategy=WatermarkStrategy.no_watermarks(),
+        source_name="streaming_agent_example",
+    ).map(
+        lambda x: ProductReview.model_validate_json(x)  # Deserialize JSON to ProductReview.
+    )
+
+    # Use the ReviewAnalysisAgent to analyze each product review.
+    review_analysis_res_stream = (
+        agents_env.from_datastream(
+            input=product_review_stream, key_selector=MyKeySelector()

Review Comment:
   Is there any way we can set a key selector without defining a custom class? 
Like a Java lambda function?
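   One hedged option, if `from_datastream` requires a `KeySelector` instance: a small adapter that wraps a plain function. The `KeySelector` base below is a local stand-in for `pyflink.datastream.KeySelector`, and `LambdaKeySelector` is a hypothetical helper, not an existing API:

   ```python
   class KeySelector:
       """Local stand-in for pyflink.datastream.KeySelector (sketch only)."""

       def get_key(self, value):
           raise NotImplementedError


   class LambdaKeySelector(KeySelector):
       """Hypothetical adapter so call sites can pass a plain lambda."""

       def __init__(self, fn):
           self._fn = fn

       def get_key(self, value):
           # Delegate key extraction to the wrapped callable.
           return self._fn(value)
   ```

   With such an adapter the call site could read `key_selector=LambdaKeySelector(lambda r: r.id)`; whether `from_datastream` already accepts a bare callable, the way PyFlink's `key_by` does, would need to be checked against the actual API.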



##########
python/flink_agents/examples/quickstart/product_improve_suggestion.py:
##########
@@ -0,0 +1,164 @@
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+#################################################################################
+from pathlib import Path
+from typing import Iterable, Union
+
+from pyflink.common import Duration, Time, WatermarkStrategy
+from pyflink.datastream import (
+    KeySelector,
+    ProcessWindowFunction,
+    StreamExecutionEnvironment,
+)
+from pyflink.datastream.connectors.file_system import FileSource, StreamFormat
+from pyflink.datastream.window import TumblingProcessingTimeWindows
+
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.examples.quickstart.agents.product_suggestion_agent import (
+    ProductReviewSummary,
+    ProductSuggestionAgent,
+)
+from flink_agents.examples.quickstart.agents.review_analysis_agent import (
+    ProductReview,
+    ProductReviewAnalysisRes,
+    ReviewAnalysisAgent,
+)
+
+current_dir = Path(__file__).parent
+
+
+class MyKeySelector(KeySelector):
+    """KeySelector for extracting key."""
+
+    def get_key(self, value: Union[ProductReview, ProductReviewSummary]) -> int:
+        """Extract the product id as the key."""
+        return value.id
+
+
+class AggregateScoreDistributionAndDislikeReasons(ProcessWindowFunction):
+    """Aggregate score distribution and dislike reasons."""
+
+    def process(
+        self,
+        key: int,
+        context: "ProcessWindowFunction.Context",
+        elements: Iterable[ProductReviewAnalysisRes],
+    ) -> Iterable[ProductReviewSummary]:
+        """Aggregate score distribution and dislike reasons."""
+        rating_counts = [0 for _ in range(5)]
+        reason_list = []
+        for element in elements:
+            rating = element.score
+            if 1 <= rating <= 5:
+                rating_counts[rating - 1] += 1
+            reason_list = reason_list + element.reasons
+        total = sum(rating_counts)
+        # Guard against division by zero when the window contains no valid scores.
+        percentages = [round((x / total) * 100, 1) if total else 0.0 for x in rating_counts]
+        formatted_percentages = [f"{p}%" for p in percentages]
+        return [
+            ProductReviewSummary(
+                id=key,
+                score_hist=formatted_percentages,
+                unsatisfied_reasons=reason_list,
+            )
+        ]
+
+
+def main() -> None:
+    """Main function for the product improvement suggestion quickstart example.
+
+    This example demonstrates a multi-stage streaming pipeline using Flink Agents:
+      1. Reads product reviews from a text file as a streaming source.
+      2. Uses an LLM agent to analyze each review and extract score and unsatisfied
+         reasons.
+      3. Aggregates the analysis results in 1-minute tumbling windows, producing score
+         distributions and collecting all unsatisfied reasons.
+      4. Uses another LLM agent to generate product improvement suggestions based on
+         the aggregated analysis.
+      5. Prints the final suggestions to stdout.
+    """
+    # Set up the Flink streaming environment and the Agents execution environment.
+    env = StreamExecutionEnvironment.get_execution_environment()
+    agents_env = AgentsExecutionEnvironment.get_execution_environment(env=env)
+
+    # Add required flink-agents jars to the environment.
+    env.add_jars(
+        f"file:///{current_dir}/../../../../runtime/target/flink-agents-runtime-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../plan/target/flink-agents-plan-0.1-SNAPSHOT.jar"
+    )
+    env.add_jars(
+        f"file:///{current_dir}/../../../../api/target/flink-agents-api-0.1-SNAPSHOT.jar"
+    )
+
+    # Read product reviews from a text file as a streaming source.
+    # Each line in the file should be a JSON string representing a ProductReview.
+    product_review_stream = env.from_source(
+        source=FileSource.for_record_stream_format(
+            StreamFormat.text_line_format(),
+            f"file:///{current_dir}/resources/product_review.txt",
+        )
+        .monitor_continuously(Duration.of_minutes(1))
+        .build(),
+        watermark_strategy=WatermarkStrategy.no_watermarks(),
+        source_name="streaming_agent_example",
+    ).map(
+        lambda x: ProductReview.model_validate_json(
+            x
+        )  # Deserialize JSON to ProductReview.
+    )
+
+    # Use the ReviewAnalysisAgent (LLM) to analyze each review.
+    # The agent extracts the review score and unsatisfied reasons.
+    review_analysis_res_stream = (
+        agents_env.from_datastream(
+            input=product_review_stream, key_selector=MyKeySelector()
+        )
+        .apply(ReviewAnalysisAgent())
+        .to_datastream()
+    )
+
+    # Aggregate the analysis results in 1-minute tumbling windows.
+    # This produces a score distribution and collects all unsatisfied reasons for
+    # each product.
+    aggregated_analysis_res_stream = (
+        review_analysis_res_stream.key_by(lambda x: x.id)
+        .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
+        .process(AggregateScoreDistributionAndDislikeReasons())
+    )

Review Comment:
   I wonder if we can convert the output of ReviewAnalysisAgent into a table and 
apply some event-time windowing. The purpose is to show how to integrate the 
agent with not only the DataStream API but also the Table API, and event-time 
is more frequently used by Flink users.



##########
python/flink_agents/examples/quickstart/resources/product_review.txt:
##########
@@ -0,0 +1,34 @@
+{"id":1,"review":"Great product! Works perfectly and lasts a long time."}

Review Comment:
   IIUC, the data here is a simplified and selected version from the [Amazon 
Review Data](https://nijianmo.github.io/amazon/index.html). While the data set 
does not come with any license, it does require citing the original paper. We 
may either add a header in the file and skip it when reading from the agent, or 
add a notice file in the directory about this.
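   If the attribution goes into a file header, the reading side could skip it before deserializing. A minimal sketch (the helper name is illustrative; in the example pipeline this would be a filter step ahead of the `map` that parses JSON):

   ```python
   import json


   def iter_reviews(lines):
       """Yield parsed review dicts, skipping blank lines and '#' header lines
       such as a citation notice. Sketch only."""
       for line in lines:
           if not line.strip() or line.lstrip().startswith("#"):
               continue
           yield json.loads(line)
   ```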



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
