Re: [grpc-io] C++ - multi-vendor gRPC-C++ "dial-out" collector

2022-06-15 Thread Salvatore Cuzzilla
Hi Richard,

The project is now open to everyone; today it officially went public: 
https://github.com/scuzzilla/mdt-dialout-collector. As you will see, it's 
still at an early stage of development, but it is already usable.

More features and stability improvements will come over the next few weeks; 
in the meantime, any technical contribution is very much appreciated.


Regards,
Salvatore.




Re: [grpc-io] C++ - multi-vendor gRPC-C++ "dial-out" collector

2022-06-15 Thread 'Richard Belleville' via grpc.io
> Nowadays, all the big players (Cisco, Huawei, Juniper, Nokia, ...) support 
> YANG to model the data and gRPC with multiple encodings (JSON, GPB-KV, GPB) 
> to share it across the network.

Interesting. I used to work in telecom until 2018. Back then, RESTCONF 
still seemed to be the dominant protocol. The telecom industry's penchant 
for acronyms doesn't disappoint: I had to look up what GPB stands for 
(Google Protocol Buffers) even though I work on the gRPC team.

> This is why I recently started developing a gRPC-C++ "dial-out" collector, 
> and I was wondering whether anyone else is already working on something 
> similar or might be interested in joining the project.

I'm not aware of any existing system specifically for OSS gRPC. I imagine 
you'd write a client interceptor that logs the initiation of each outgoing 
connection to a per-process datastore, and have a separate thread 
periodically send a batch of updates to an aggregation server. Depending on 
the scale of your system, the design of the aggregation server could get 
tricky.
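
For the interceptor half of that idea, a minimal sketch along these lines, 
using gRPC C++'s experimental interceptor API, might look as follows. The 
channel target and the upstream "send a batch" step are placeholders, and 
the exact class names are worth checking against the gRPC release you build 
against:

// A client interceptor that records each outgoing RPC in a per-process
// store, plus a background thread that periodically drains the store.
#include <grpcpp/grpcpp.h>
#include <grpcpp/support/client_interceptor.h>

#include <chrono>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Per-process datastore of RPC initiations.
struct RpcLog {
  std::mutex mu;
  std::vector<std::string> entries;
};
RpcLog g_rpc_log;

class LoggingInterceptor : public grpc::experimental::Interceptor {
 public:
  explicit LoggingInterceptor(grpc::experimental::ClientRpcInfo* info)
      : method_(info->method()) {}

  void Intercept(grpc::experimental::InterceptorBatchMethods* methods) override {
    if (methods->QueryInterceptionHookPoint(
            grpc::experimental::InterceptionHookPoints::PRE_SEND_INITIAL_METADATA)) {
      std::lock_guard<std::mutex> lock(g_rpc_log.mu);
      g_rpc_log.entries.push_back(method_);  // record the RPC initiation
    }
    methods->Proceed();  // always hand control back to gRPC
  }

 private:
  std::string method_;
};

class LoggingInterceptorFactory
    : public grpc::experimental::ClientInterceptorFactoryInterface {
 public:
  grpc::experimental::Interceptor* CreateClientInterceptor(
      grpc::experimental::ClientRpcInfo* info) override {
    return new LoggingInterceptor(info);  // gRPC takes ownership
  }
};

// Background thread: periodically ship a batch to the aggregation server.
// The actual upstream call is stubbed out with a print.
void AggregationLoop() {
  for (;;) {
    std::this_thread::sleep_for(std::chrono::seconds(30));
    std::vector<std::string> batch;
    {
      std::lock_guard<std::mutex> lock(g_rpc_log.mu);
      batch.swap(g_rpc_log.entries);
    }
    std::cout << "would send " << batch.size() << " entries upstream\n";
  }
}

int main() {
  std::thread(AggregationLoop).detach();

  // Attach the interceptor to whichever channels you want to monitor.
  std::vector<std::unique_ptr<grpc::experimental::ClientInterceptorFactoryInterface>>
      creators;
  creators.push_back(std::make_unique<LoggingInterceptorFactory>());
  auto channel = grpc::experimental::CreateCustomChannelWithInterceptors(
      "some-backend.example.com:443",  // placeholder target
      grpc::InsecureChannelCredentials(), grpc::ChannelArguments(),
      std::move(creators));
  (void)channel;  // ... create stubs on `channel` and issue RPCs as usual ...
}

The per-process store here is just a mutex-protected vector; anything with 
bounded memory (a ring buffer, a lock-free queue) would do, and the drain 
interval trades freshness against load on the aggregation server.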




[grpc-io] C++ - multi-vendor gRPC-C++ "dial-out" collector

2022-06-08 Thread Salvatore Cuzzilla
Hi Community, 

I was looking for an efficient way to collect metrics from a relatively 
large (>1,000 devices), multi-vendor network.

Nowadays, all the big players (Cisco, Huawei, Juniper, Nokia, ...) support 
YANG to model the data and gRPC with multiple encodings (JSON, GPB-KV, GPB) 
to share it across the network.
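
Because a device may announce any of those encodings, a collector typically 
has to pick a decoder per message. A purely illustrative sketch (the enum 
values and decoder bodies below are hypothetical placeholders, not taken 
from any vendor proto):

// Picking a decoder based on the encoding announced by the device.
#include <iostream>
#include <stdexcept>
#include <string>

enum class Encoding { kJson, kGpbKv, kGpbCompact };

std::string DecodeJson(const std::string& payload) {
  return "json: " + payload;  // real code would parse the JSON document
}
std::string DecodeGpbKv(const std::string& payload) {
  return "gpb-kv payload, " + std::to_string(payload.size()) + " bytes";
}
std::string DecodeGpbCompact(const std::string& payload) {
  return "gpb payload, " + std::to_string(payload.size()) + " bytes";
}

std::string Decode(Encoding enc, const std::string& payload) {
  switch (enc) {
    case Encoding::kJson:       return DecodeJson(payload);
    case Encoding::kGpbKv:      return DecodeGpbKv(payload);
    case Encoding::kGpbCompact: return DecodeGpbCompact(payload);
  }
  throw std::runtime_error("unknown encoding");
}

int main() {
  std::cout << Decode(Encoding::kJson, R"({"ifHCInOctets": 12345})") << "\n";
}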

This is why I recently started developing a gRPC-C++ "dial-out" collector, 
and I was wondering whether anyone else is already working on something 
similar or might be interested in joining the project.

The collector is written in C++ using gRPC's async API plus multi-threading 
to maximize scalability.
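
In the dial-out model the roles are reversed: the router acts as the gRPC 
client and streams telemetry to the collector, so the collector implements 
the server side of a streaming RPC. A minimal sketch of that receive path, 
using the synchronous API for brevity (the project itself uses the async 
API) and a hypothetical proto standing in for a vendor dial-out definition:

// Hypothetical proto (dialout.proto), loosely modeled on vendor dial-out
// definitions; generate C++ stubs with protoc + the grpc_cpp_plugin:
//
//   syntax = "proto3";
//   service DialoutService {
//     rpc Dialout(stream TelemetryMsg) returns (stream TelemetryMsg);
//   }
//   message TelemetryMsg {
//     int64 req_id = 1;
//     bytes data = 2;  // serialized telemetry (JSON, GPB-KV, or GPB)
//   }
#include <grpcpp/grpcpp.h>

#include <iostream>
#include <memory>

#include "dialout.grpc.pb.h"  // generated from the hypothetical proto above

class CollectorService final : public DialoutService::Service {
  grpc::Status Dialout(
      grpc::ServerContext* ctx,
      grpc::ServerReaderWriter<TelemetryMsg, TelemetryMsg>* stream) override {
    TelemetryMsg msg;
    while (stream->Read(&msg)) {
      // Hand the payload to a decoder (e.g. the encoding switch above)
      // or push it onto a queue for the worker threads.
      std::cout << "got " << msg.data().size() << " bytes from "
                << ctx->peer() << "\n";
    }
    return grpc::Status::OK;
  }
};

int main() {
  CollectorService service;
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:10001",  // arbitrary port
                           grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  std::cout << "collector listening on 0.0.0.0:10001\n";
  server->Wait();
}

The synchronous API spawns a thread per stream, which is fine for a sketch; 
scaling to >1,000 devices is exactly where the async API plus a fixed worker 
pool pays off.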


Regards,
Salvatore.
