Re: [grpc-io] Re: grpcio wheel build

2019-08-22 Thread 'Lidi Zheng' via grpc.io
Hi Ioannis,

This error doesn't look like a bug in the installation process; the
failure is in the compilation of the grpcio-tools package.
Can you check the working directory of your IDE and the relative path of
that file?
It seems something in the settings is wrong.

Lidi

On Thu, Aug 22, 2019 at 1:08 AM Ioannis Vogiatzis Oikonomidis <
ioannis.vogiatzisoikonomi...@ansys.com> wrote:

> Yes the Core build succeeds.
>
>
>
> The command that fails is the following (it is running under a build agent)
>
> $(Build.BinariesDirectory)\python\CPython\$(PythonVersionShort)\winx64\Release\python\python.exe
> -VVV setup.py  --prefix=$(Build.SourcesDirectory) build_ext -c msvc -vvv ||
> goto :error
>
>
>
> Before calling the command I am modifying the PATH and adding the protobuf
> c-core build
>
> ** Visual Studio 2017 Developer Command Prompt v15.8.5
>
> ** Copyright (c) 2017 Microsoft Corporation
>
> **
>
> Environment initialized for: 'x64'
>
> Python 3.6.7 (heads/python_3.6.7_vs2017-dirty:cdc6a0e, May 28 2019,
> 15:50:17) [MSC v.1910 64 bit (AMD64)]
>
> Python 3.6.7 (heads/python_3.6.7_vs2017-dirty:cdc6a0e, May 28 2019,
> 15:50:17) [MSC v.1910 64 bit (AMD64)]
>
> Compiling grpc_tools\_protoc_compiler.pyx because it changed.
>
> Cythonizing grpc_tools\_protoc_compiler.pyx
>
> C:\Dev\tfs_agent\_work\57\b\python\CPython\3_6\winx64\Release\python\lib\site-packages\Cython\Compiler\Main.py:369:
> FutureWarning: Cython directive 'language_level' not set, using 2 for now
> (Py2). This will change in a later release! File:
> C:\Dev\tfs_agent\_work\57\s\tools\distrib\python\grpcio_tools\grpc_tools\_protoc_compiler.pyx
>
>   tree = Parsing.p_module(s, pxd, full_module_name)
>
> Traceback (most recent call last):
>
>   File "setup.py", line 209, in <module>
>
> package_data=package_data(),
>
>   File "setup.py", line 155, in package_data
>
> shutil.copy(source, target)
>
>   File
> "C:\Dev\tfs_agent\_work\57\b\python\CPython\3_6\winx64\Release\python\lib\shutil.py",
> line 241, in copy
>
> copyfile(src, dst, follow_symlinks=follow_symlinks)
>
>   File
> "C:\Dev\tfs_agent\_work\57\b\python\CPython\3_6\winx64\Release\python\lib\shutil.py",
> line 120, in copyfile
>
> with open(src, 'rb') as fsrc:
>
> FileNotFoundError: [Errno 2] No such file or directory:
> 'third_party\\protobuf\\src\\google\\protobuf\\wrappers.proto'
>
>
>
>
>
> *From:* 'Lidi Zheng' via grpc.io 
> *Sent:* Wednesday, 21 August 2019 22:45
> *To:* grpc.io 
> *Subject:* Re: [grpc-io] Re: grpcio wheel build
>
>
>
> Did the C-Core build succeed? Can you provide specific command and logs?
>
> On Wednesday, August 21, 2019 at 12:40:11 PM UTC-7, Ioannis Vogiatzis
> Oikonomidis wrote:
>
> Yes I did. The files are there. I checked.
>
> I more or less copy-pasted the build scripts from the links.
> --
>
> *From:* 'Lidi Zheng' via grpc.io 
> *Sent:* Wednesday, August 21, 2019 8:06:17 PM
> *To:* grpc.io 
> *Subject:* [grpc-io] Re: grpcio wheel build
>
>
>
> Did you pull in the submodules?
>
>
>
> git submodule update --init
>
> It looks like those third_party files are missing.
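As a quick sanity check before re-running the wheel build, a short script can verify that the submodule file named in the traceback is actually present. This is just a sketch; the path comes directly from the FileNotFoundError above, and the function name is made up for illustration:

```python
from pathlib import Path

# The file from the FileNotFoundError above; it exists only after
# `git submodule update --init` has populated third_party/.
WRAPPERS_PROTO = Path("third_party/protobuf/src/google/protobuf/wrappers.proto")

def protobuf_submodule_present(repo_root: str = ".") -> bool:
    """Return True if the protobuf submodule appears to be checked out."""
    return (Path(repo_root) / WRAPPERS_PROTO).is_file()
```

Run it from the root of the grpc checkout; if it returns False, `git submodule update --init` has not been run (or was run in a different directory than the one setup.py executes in).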
>
> On Friday, August 16, 2019 at 8:28:19 AM UTC-7, ioannis.vogia...@ansys.com
> wrote:
>
> I am trying to build grpcio against an already built protobuf version.
>
>
>
> I am first building grpc core according to these instructions
>
>
> https://github.com/grpc/grpc/blob/e6cd312346655d9a936acfb97927dbcd35615623/test/distrib/cpp/run_distrib_test_cmake.bat
>
>
>
>
> then I am moving on to the wheel build according to these instructions
>
>
> https://github.com/grpc/grpc/blob/7f32b96e3d9093dff6f0584ad605a2f10a744ec8/tools/run_tests/artifacts/build_artifact_python.bat
>
>
>
> However, the build keeps failing (with or without Cython) with the
> following error:
>
> [Errno 2] No such file or directory:
> 'third_party\\protobuf\\src\\google\\protobuf\\wrappers.proto'
>
>
>
> My build toolchain is the following:
>
> - MSVC 2017
>
> - Python 3.6
>
> - setuptools  40.8.0
>
> - cython 0.29.13
>
>
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "grpc.io" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/grpc-io/brh7Jj7Dso0/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> grp...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/cd77608f-6628-40bd-a852-ce147e681738%40googlegroups.com
> 
> .
>

[grpc-io] Re: Grpc java retry documentation

2019-08-22 Thread ramachandran . satishkumar
Hi Penn, is there any update on this? We are seeing if we can start using 
the retry mechanism in our Java clients. 
Thanks.

On Tuesday, November 6, 2018 at 11:19:35 AM UTC-8, Penn (Dapeng) Zhang 
wrote:
>
> Retry is already implemented (except hedging support) in v1.16.1, but 
> there is no document other than the spec yet, because there are some 
> caveats for the time being: (1) users need to disable census stats 
> and tracing when enabling retry, and (2) there are some caveats for using 
> service config. In the next release, v1.17.0, enabling retry will 
> automatically disable census.
>
> As for DEADLINE_EXCEEDED, it does not make sense to retry at the library 
> level, because a retry attempt would immediately exceed the deadline of the 
> RPC. Users could implement application-level retry for DEADLINE_EXCEEDED 
> themselves by making a new RPC.
>
> On Monday, November 5, 2018 at 12:55:42 PM UTC-8, David M wrote:
>>
>> I am trying to find a document describing grpc-java retry behavior.
>> All I was able to find is this proposal : 
>> https://github.com/grpc/proposal/blob/master/A6-client-retries.md
>> It's not clear what is already implemented.
>>
>> Thanks!
>>
>
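The application-level retry Penn describes, where each attempt is a brand-new RPC with a fresh deadline, can be sketched as follows. This is a generic illustration, not grpc-java's API: the exception class and helper name are hypothetical stand-ins (in real code you would catch StatusRuntimeException in Java, or grpc.RpcError in Python, and check the status code):

```python
import time

class DeadlineExceeded(Exception):
    """Hypothetical stand-in for a gRPC DEADLINE_EXCEEDED status."""

def call_with_fresh_deadlines(rpc, attempts=3, timeout_s=1.0, backoff_s=0.1):
    """Retry an RPC at the application level.

    Each attempt is a *new* RPC with a fresh deadline -- retrying inside
    the library would be pointless, because the original RPC's deadline
    has already passed by the time DEADLINE_EXCEEDED is observed.
    """
    for attempt in range(attempts):
        try:
            return rpc(timeout=timeout_s)  # new RPC, new deadline
        except DeadlineExceeded:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(backoff_s * (2 ** attempt))  # simple exponential backoff
```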

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/fa4bb8f8-83c1-4549-bfc0-847f59124bca%40googlegroups.com.


[grpc-io] Re: L59 : Allow C++ standard library in gRPC Core Library

2019-08-22 Thread 'Esun Kim' via grpc.io
That's a good point. I'll update the doc to include more about 
platform-specific issues.

The main reason I didn't mention macOS and Windows is that this is 
relatively easy to have there, because both platforms are more centralized. 
On Windows, as you said, static linking has been adopted for the gRPC 
prebuilt libraries, so it mostly shouldn't matter. On macOS, libc++.so and 
libstdc++.so have been preinstalled for years, so it's fairly reasonable to 
assume that they are there.

On Tuesday, August 20, 2019 at 6:54:13 PM UTC-7, Christopher Warrington - 
MSFT wrote:
>
> On Tuesday, August 20, 2019 at 4:30:29 PM UTC-7, Esun Kim wrote:
>
> > This is the discussion thread for L59 - Allow C++ standard library in 
> gRPC
> > Core Library
>
> The biggest thing I see missing from this proposal is how it affects
> platforms other than Linux. There is no mention of macOS or Windows. I
> assume that only the intersection of the C++11 features supported in the
> manylinux1 standard library, MSVC 2015 and later, and similar on macOS can
> be used.
>
> For Windows, in particular, it may be worth mentioning that pre-compiled
> gRPC releases statically link the C++ runtime, so this proposal should not
> affect packaging or deployment. (This is the case for C#. Not sure about 
> how
> the other language wrappers work.)
>
> --
> Christopher Warrington
> Microsoft Corp.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2a7acdac-1ae0-4991-8f4d-70e61907fa1f%40googlegroups.com.


[grpc-io] Re: Relationship between CompletionQueues and TheadManagers in sync servers

2019-08-22 Thread terence . lehuuphuong
I should have added that I am working with C/C++.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/37265b8c-30d6-41b4-9941-609539eb8b5b%40googlegroups.com.


Re: [grpc-io] Build Process When Using Multitude Languages

2019-08-22 Thread Amit
Hello Eric!

Thanks for the reply. I did bump into GAPIC while trying to understand
what the YAML files do in 'GoogleApi'. I reached the same conclusion you
mentioned - it does a lot more than we need it to. It looks like
GAPIC is tailored specifically to what Google needs (nothing wrong with
that). If I can't find a leaner way of doing what I'm aiming for, I'll
dive into GAPIC and see if I can control some of the options - for example,
removing the uploading, or adding another step that compiles Google's gRPC
gateway plug-in.

Thanks!

On Wed, Aug 21, 2019 at 9:17 PM Eric Anderson  wrote:

> You may be interested in
> https://github.com/googleapis/gapic-generator/tree/master/rules_gapic .
> Granted, that is only really intended for googleapi's usage, but it is
> solving the same basic problem. Biggest issue is probably that it does a
> lot more than you are wanting. It creates a native package for various
> languages, which then can be uploaded to the appropriate language-specific
> repositories. You can find the .snip files in the repository, which are
> templates.
>
> On Mon, Aug 19, 2019 at 7:06 AM Joey  wrote:
>
>> Hello everyone :-)
>>
>>
>> The company I'm working with embraced gRPC a while ago (micro-service
>> architecture). The gRPC implementation we use is the official Java
>> implementation, and here's what our build process looks like:
>>
>> - Each micro-service has its own git repository.
>>
>> - If micro-service *A*'s protobuf files depend on another
>> micro-service *B*'s protobuf files to be compiled - upon
>> building micro-service *A*, a Gradle plug-in will reach out and grab
>> micro-service *B*'s protobuf files.
>>
>> - When all the dependencies exist, the same Gradle plug-in will use
>> protoc to generate the gRPC stubs and compile micro-service *A*.
>> Additional steps, like creating a Docker image and deploying the service,
>> also happen.
>>
>> - Because some of our UI services use a REST API, along with compiling the
>> stubs and the service we also use the gRPC gateway to generate a REST API
>> gateway and Swagger JSON files, and deploy those separately.
>>
>> This worked well, but it suffers from two problems:
>>
>> 1. Each build requires the project to reach external projects in order to
>> get the latest protobuf files, and this might take time.
>> 2. The protobuf code is being generated over and over. What would be
>> better is to have a JAR already out there for each micro-service, so
>> micro-service *A* can just reach out and consume the micro-service *B* JAR.
>>
>> Also, lately more people are embracing gRPC, and this includes more
>> languages, like Python and Go. So a broader build process is needed to
>> support a multitude of languages. Following in Google's footsteps and using
>> the "googleapi" repository as a guideline, we decided to have a single repo
>> that will host all the company's protobuf files, so building will now
>> happen in a single place rather than in every project. Now what needs to be
>> done is to implement a unified solution that builds generated protobuf code
>> in multiple languages, publishes the artifacts (in a package when possible,
>> e.g. a JAR), and builds a gRPC gateway plus the Swagger files for each
>> service. Here are two approaches:
>>
>> 1. Create a basic, non-language-specific *makefile* (a shell script of
>> sorts). It will probably just visit each directory (one directory per
>> service), use protoc a couple of times (once per language), and create
>> packages if possible, along with the gateway and Swagger files. I can even
>> call the Gradle plug-in directly when building the Java artifacts, but
>> that's a bit hacky.
>>
>> 2. Use *Bazel* - googleapi uses *bazel-tools* and *rules-go* to create
>> artifacts in Java and Go, but I couldn't find a plug-in that handles the
>> creation of the gRPC gateway or the Swagger files. I did find another
>> repository with a couple of Bazel rules, called "rules_proto" (
>> https://github.com/stackb/rules_proto), that does support Swagger and the
>> gRPC gateway - but I wasn't able to make it work out of the box (I haven't
>> fully debugged what went wrong yet). So Bazel is an option, but it feels
>> like it's not really a mature solution, as it requires tailoring a
>> specific setup - and it's not streamlined yet (for example, there's no way
>> to create a single JAR with a dependency tree between services, only a JAR
>> per micro-service).
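Approach 1 above — a plain driver that visits each service directory and invokes protoc once per language — could be sketched roughly like this. It is only a sketch: the output layout and language list are placeholders, the Go flag additionally needs the protoc-gen-go plug-in on PATH, and a real driver would hand each command to subprocess.run():

```python
from pathlib import Path

# Languages to generate; each maps to a real protoc output flag.
LANGS = {"python": "--python_out", "java": "--java_out", "go": "--go_out"}

def protoc_commands(proto_root, out_root="gen"):
    """Build one protoc invocation per (service directory, language).

    Returns the command lists instead of running them, so the build plan
    is easy to inspect or dry-run.
    """
    cmds = []
    for service_dir in sorted(p for p in Path(proto_root).iterdir() if p.is_dir()):
        protos = sorted(str(f) for f in service_dir.glob("*.proto"))
        if not protos:
            continue  # directory has no .proto files to compile
        for lang, flag in LANGS.items():
            out = Path(out_root) / lang / service_dir.name
            cmds.append(["protoc", f"-I{proto_root}", f"{flag}={out}", *protos])
    return cmds
```

A real version would also create the output directories, add the gateway/Swagger generation steps, and package the results per language.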
>>
>> So before I invest more time writing an external Makefile, or
>> tweaking/writing Bazel plugins, I figured I'd come ask here - because I'm
>> probably not the only one trying to do something like this.
>>
>> Thank you!
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to grpc-io+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/grpc-io/7c4408c7-ae28-4aac-8133-78a933342c9f%40googlegroups.com
>

[grpc-io] Relationship between CompletionQueues and TheadManagers in sync servers

2019-08-22 Thread terence . lehuuphuong
Hello everyone,

I am trying to understand the general architecture of the *sync server* 
threading logic, and especially the way completion queues and thread 
managers work together.

From wandering around the internet, I keep swinging between different ideas:

My first understanding was that each CompletionQueue has a pollset (all 
the fds in the pollset bind to the same port (SO_REUSEPORT)), and every time 
it receives an event, it hands it out to a thread from the thread manager.

But the PollForWork done by threads in the thread manager made me rethink 
this basic approach. Threads could do the actual polling, waiting for 
requests from clients, and then push requests into the CQ with their tags. A 
CQ thread would then dequeue each request and handle it.

I saw a couple of projects using the async server with multiple completion 
queues and only one thread per queue, while the sync server architecture 
uses a thread pool per queue.
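The pattern described above — a pool of worker threads that each poll a shared queue for tagged events, rather than a single dispatcher thread — can be illustrated with a toy model. To be clear, this is only a sketch of the question's model, not gRPC's actual implementation; the names are stand-ins (the queue for a completion queue, poll_for_work for PollForWork):

```python
import queue
import threading

def run_pool(events, num_threads=4):
    """Toy model: workers poll a shared queue for tags and handle them.

    Each worker both "polls" (dequeues) and processes, which is the
    key difference from a model where one dispatcher thread dequeues
    and hands work to idle threads.
    """
    cq = queue.Queue()
    for tag in events:
        cq.put(tag)

    handled = []
    lock = threading.Lock()

    def poll_for_work():
        while True:
            try:
                tag = cq.get_nowait()  # worker pulls its own work
            except queue.Empty:
                return  # nothing left; worker exits
            with lock:
                handled.append(tag)  # stand-in for running the RPC handler

    threads = [threading.Thread(target=poll_for_work) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return handled
```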

So in a nutshell, I am wondering:


   - Who is doing the actual polling and reading the RPC requests from 
   clients: the ThreadManager (and its threads) through PollForWork(), or 
   the CompletionQueues?


   - Is the CQ running in a different thread from the TManager? I saw that 
   there is a ThreadManager per CQ.


   - Why do we need multiple CQs?


Thank you for your time,

Regards,
Terence

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/db6cf6c4-c25b-480e-8911-98c3d564b7b4%40googlegroups.com.