Thanks, Leonard, for the suggestion.
- Checked that all POM files point to the same version
- Built the source with Maven
- Ran end-to-end tests locally; all passed
- Tested reading from and writing to MongoDB on a local cluster (steps below)
- Tested the lookup table feature on a local cluster (steps below)
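For reference, the version and build checks were along these lines (the
commands are illustrative; paths and module layout may differ):

$ # confirm that every POM declares the same version
$ find . -name pom.xml | xargs grep -n '<version>'
$ # or print the effective project version
$ mvn -q help:evaluate -Dexpression=project.version -DforceStdout
$ # build from source
$ mvn clean install -DskipTests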
Tested reading from and writing to MongoDB with the following steps.
-- prepare test data in MongoDB
> use test;
> db.users.insertMany([
{ "user_id": NumberLong(100), "user_name": "Bob", "region": "Beijing" },
{ "user_id": NumberLong(101), "user_name": "Alice", "region": "Shanghai" },
{ "user_id": NumberLong(102), "user_name": "Greg", "region": "Berlin" },
{ "user_id": NumberLong(103), "user_name": "Richard", "region": "Berlin" }
]);
-- register a MongoDB source that interprets the collection as an
-- append-only stream
CREATE TABLE users (
user_id BIGINT,
user_name STRING,
region STRING
) WITH (
'connector' = 'mongodb',
'uri' = 'mongodb://username:password@localhost:27017',
'database' = 'test',
'collection' = 'users'
);
> SELECT * FROM users;
+---------+-----------+----------+
| user_id | user_name |   region |
+---------+-----------+----------+
|     100 |       Bob |  Beijing |
|     101 |     Alice | Shanghai |
|     102 |      Greg |   Berlin |
|     103 |   Richard |   Berlin |
+---------+-----------+----------+
-- register a MongoDB sink for storing the latest user information
CREATE TABLE users_snapshot (
user_id BIGINT,
user_name STRING,
region STRING,
PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
'connector' = 'mongodb',
'uri' = 'mongodb://username:password@localhost:27017',
'database' = 'test',
'collection' = 'users_snapshot'
);
> INSERT INTO users_snapshot SELECT * FROM users;
> SELECT * FROM users_snapshot;
+---------+-----------+----------+
| user_id | user_name |   region |
+---------+-----------+----------+
|     100 |       Bob |  Beijing |
|     101 |     Alice | Shanghai |
|     102 |      Greg |   Berlin |
|     103 |   Richard |   Berlin |
+---------+-----------+----------+
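Since user_id is declared as the primary key, writes to users_snapshot are
upserts. As a quick sanity check, one can re-insert a changed row and confirm
it overwrites the old one rather than adding a duplicate (illustrative values):

> INSERT INTO users_snapshot VALUES (100, 'Bob', 'Hangzhou');
> SELECT * FROM users_snapshot WHERE user_id = 100;
-- expect a single row for user_id 100, now with region 'Hangzhou'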
-------------------------------------------------------------------
Tested the lookup table feature with the following steps.
-- prepare test data in MongoDB
> use test;
> db.pageviews.insertMany([
  { "user_id": NumberLong(100), "page_id": NumberLong(10001), "viewtime": Timestamp(1601510460, 0) },
  { "user_id": NumberLong(102), "page_id": NumberLong(10002), "viewtime": Timestamp(1601510520, 0) },
  { "user_id": NumberLong(101), "page_id": NumberLong(10002), "viewtime": Timestamp(1601510640, 0) },
  { "user_id": NumberLong(102), "page_id": NumberLong(10004), "viewtime": Timestamp(1601510760, 0) },
  { "user_id": NumberLong(102), "page_id": NumberLong(10003), "viewtime": Timestamp(1601510820, 0) }
]);
-- register a MongoDB source with a processing-time attribute, which the
-- lookup join below requires
CREATE TABLE pageviews (
user_id BIGINT,
page_id BIGINT,
viewtime TIMESTAMP_LTZ(0),
proctime AS PROCTIME()
) WITH (
'connector' = 'mongodb',
'uri' = 'mongodb://username:password@localhost:27017',
'database' = 'test',
'collection' = 'pageviews'
);
> SET 'table.local-time-zone' = 'UTC';
> SELECT * FROM pageviews;
+---------+---------+---------------------+----------+
| user_id | page_id |            viewtime | proctime |
+---------+---------+---------------------+----------+
|     100 |   10001 | 2020-10-01 00:01:00 | ........ |
|     102 |   10002 | 2020-10-01 00:02:00 | ........ |
|     101 |   10002 | 2020-10-01 00:04:00 | ........ |
|     102 |   10004 | 2020-10-01 00:06:00 | ........ |
|     102 |   10003 | 2020-10-01 00:07:00 | ........ |
+---------+---------+---------------------+----------+
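As a sanity check on the time zone handling, the epoch seconds in the inserted
Timestamp values should render as the viewtime values above; for example, with
the UTC session time zone set above:

> SELECT TO_TIMESTAMP_LTZ(1601510460, 0);
-- 2020-10-01 00:01:00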
-- register a MongoDB sink for the enriched pageviews
CREATE TABLE pageviews_enriched (
user_id BIGINT,
page_id BIGINT,
viewtime TIMESTAMP_LTZ(0),
user_region STRING,
WATERMARK FOR viewtime AS viewtime - INTERVAL '2' SECOND
) WITH (
'connector' = 'mongodb',
'uri' = 'mongodb://username:password@localhost:27017',
'database' = 'test',
'collection' = 'pageviews_enriched'
);
-- enrich each pageview with the user's region via a lookup join
INSERT INTO pageviews_enriched
SELECT p.user_id,
p.page_id,
p.viewtime,
u.region
FROM pageviews AS p
LEFT JOIN users FOR SYSTEM_TIME AS OF p.proctime AS u
ON p.user_id = u.user_id;
> SELECT * FROM pageviews_enriched;
+---------+---------+---------------------+-------------+
| user_id | page_id |            viewtime | user_region |
+---------+---------+---------------------+-------------+
|     100 |   10001 | 2020-10-01 00:01:00 |     Beijing |
|     102 |   10002 | 2020-10-01 00:02:00 |      Berlin |
|     101 |   10002 | 2020-10-01 00:04:00 |    Shanghai |
|     102 |   10004 | 2020-10-01 00:06:00 |      Berlin |
|     102 |   10003 | 2020-10-01 00:07:00 |      Berlin |
+---------+---------+---------------------+-------------+
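One note for anyone reproducing this: the lookup join issues a point query
against MongoDB for each incoming row, so an index on the join key keeps the
lookups fast (illustrative, not part of the verification itself):

> db.users.createIndex({ "user_id": 1 });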
Best,
Jiabao
> On Mar 20, 2023, at 4:38 PM, Leonard Xu <[email protected]> wrote:
>
>
>> +1 (non-binding)
>>
>> Best,
>> Jiabao
>
> Hi, Jiabao
>
> Thanks for helping to verify the release candidate. However, we need to do
> the verification first and then post the verified items in the release
> candidate vote thread.
>
> The community does not require verification of all items in the document [1];
> each contributor can determine the items they need to verify based on the
> modules they are familiar with and the features they care about.
>
> Best,
> Leonard
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release
>
>>
>>> On Mar 17, 2023, at 4:32 AM, Martijn Visser <[email protected]> wrote:
>>>
>>> +1 (binding)
>>>
>>> - Validated hashes
>>> - Verified signature
>>> - Verified that no binaries exist in the source archive
>>> - Built the source with Maven
>>> - Verified licenses
>>> - Verified web PRs
>>>
>>> On Thu, Mar 16, 2023 at 11:21 AM Danny Cranmer <[email protected]>
>>> wrote:
>>>
>>>> Hi everyone,
>>>> Please review and vote on release candidate #1 for version 1.0.0, as
>>>> follows:
>>>> [ ] +1, Approve the release
>>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>>
>>>>
>>>> The complete staging area is available for your review, which includes:
>>>> * JIRA release notes [1],
>>>> * the official Apache source release to be deployed to dist.apache.org
>>>> [2],
>>>> which are signed with the key with fingerprint
>>>> 0F79F2AFB2351BC29678544591F9C1EC125FD8DB [3],
>>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>>> * source code tag v1.0.0-rc1 [5],
>>>> * website pull request listing the new release [6].
>>>>
>>>> Given this is a new connector, I will follow up with a blog post, but it will not
>>>> block the release.
>>>>
>>>> The vote will be open for at least 72 hours (ending 2023-03-21 11:00 UTC).
>>>> It is adopted by majority approval, with at least 3 PMC affirmative votes.
>>>>
>>>> Thanks,
>>>> Danny
>>>>
>>>> [1]
>>>>
>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352386
>>>> [2]
>>>>
>>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.0.0-rc1/
>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1598
>>>> [5]
>>>> https://github.com/apache/flink-connector-mongodb/releases/tag/v1.0.0-rc1
>>>> [6] https://github.com/apache/flink-web/pull/622
>>>>
>>