Hi All,
Other projects/summits like Kafka and Spark offer add-on training days to
their summits. I'm wondering about the appetite/interest for hands-on
sessions for working with Beam, and whether we think that'd be helpful. Are
there people who would benefit from a beginning-with-Beam day, or a more
Robert - you meant this as a mostly-automatic thing that we would engineer,
yes?
A lighter-weight fake, like something in-process sharing a Java interface
(versus today's locally running service sharing an RPC interface), is still
much better than a mock.
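The fake-versus-mock distinction above can be sketched as follows. This is a minimal illustration in Python rather than Java, and the names (`KVStore`, `FakeKVStore`) are hypothetical, not Beam APIs: the fake is a real in-process implementation of the same interface, so tests exercise behavior instead of canned mock answers.

```python
# Hypothetical sketch: an in-process fake sharing the backend's interface,
# as opposed to a mock with stubbed responses.
from abc import ABC, abstractmethod

class KVStore(ABC):
    """The interface the production backend implements."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class FakeKVStore(KVStore):
    """In-process fake: real behavior, no RPC, no stubbing."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

store = FakeKVStore()
store.put("a", 1)
assert store.get("a") == 1           # actual behavior, not a canned answer
assert store.get("missing") is None  # edge cases come for free
```

Because the fake genuinely implements the interface, edge cases and interaction sequences that a mock would need explicit stubs for just work.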
Kenn
On Mon, Jan 21, 2019 at 7:17
Just some background, grpcio-tools is what's used to do the proto
generation. Unfortunately it's expensive to compile and doesn't
provide very many wheels, so we want to install it once, not every
time. (It's also used in more than just tests; one needs it every time
the .proto files change.)
Hi,
it makes sense to use an embedded backend when:
1. it's possible to easily embed the backend
2. the backend is "predictable"
If it's easy to embed and the backend behavior is predictable, then it
makes sense. In other cases, we can fall back to a mock.
Regards
JB
On 21/01/2019 10:07,
Thanks Robert for your answer. Grouping tests is a good idea, thanks for the
reminder. I'll use that if new flakiness
shows up and if I have no countermeasures left :)
Etienne
Le lundi 21 janvier 2019 à 12:39 +0100, Robert Bradshaw a écrit :
> I am of the same opinion, this is the approach we're
High Priority Dependency Updates Of Beam Python SDK:

Dependency Name | Current Version | Latest Version | Release Date Of The Current Used Version | Release Date Of The Latest Release | JIRA Issue
future          | 0.16.0          | 0.17.1         | 2016-10-27                               |                                    |
I am of the same opinion, this is the approach we're taking for Flink
as well. Various mitigations (e.g. capping the parallelism at 2 rather
than the default of num cores) have helped.
Several times the idea has been proposed to group unit tests together
for "expensive" backends. E.g. for
Opened PR https://github.com/apache/beam/pull/7580 adding a target to the
spotless task.
At least it fixes it on my machine. Includes/excludes might need discussion,
though.
On Mon, Jan 21, 2019 at 1:37 AM Michael Luckey wrote:
> Unfortunately, this is still broken. It seems that antlrplugin itself is
Dear all,
It happened again on Friday morning:
You can see a baseline in the connection count from the 16th to the 18th.
Looking at pg_stat_activity, all connections are used, even when the
pipeline is not used at 100 % (my use case is processing data from a
platform which is not used a
Hi guys,
Lately I have been fixing various Elasticsearch flakiness issues in the UTests
by introducing timeouts, countdown
latches, forced refreshes, and decreasing the embedded cluster size.
These flakiness issues are due to the embedded Elasticsearch not coping well
with the Jenkins overload. Still,
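The countdown-latch fix mentioned above can be sketched as follows, using Python's `threading.Event` as a stand-in for Java's `CountDownLatch` (the `indexer` function is hypothetical): the test blocks until the backend signals completion, with a timeout guard, instead of sleeping a fixed amount and hoping the overloaded machine was fast enough.

```python
# Hypothetical sketch: waiting on a completion signal instead of a
# fixed sleep, the CountDownLatch-style flakiness fix discussed above.
import threading

ready = threading.Event()
results = []

def indexer():
    results.append("doc-1")  # simulate the backend finishing its work
    ready.set()              # signal completion

t = threading.Thread(target=indexer)
t.start()

# Block until signalled, bounded by a timeout, rather than sleeping.
assert ready.wait(timeout=5)
t.join()
assert results == ["doc-1"]
```

On a loaded CI machine the wait simply takes longer; with a fixed sleep, the test instead fails intermittently.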