On Tue, Dec 31, 2019 at 7:23 AM Ron Wahler wrote:
>
> How much of the read behavior is in Go's underlying code on the read side,
> and how much is in the underlying OS driver that implements the read? I
> understand the stream nature of the TCP connection and I handle that in my
> code [...]
If you are really interested in how the Go code relates to the underlying
socket system calls, the code is readily available. AFAICT, the
TCPConn.Read() call *on linux* eventually comes down to the unix version of
poll.FD.Read(). See https://golang.org/src/internal/poll/fd_unix.go#L145.
[...]
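For concreteness, a minimal sketch of what that Read gives you at the
net.Conn level (the address is a placeholder): each call blocks until at
least one byte is available, then returns however much the kernel had ready,
anywhere from 1 byte up to len(buf).

package main

import (
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 64*1024)
    for {
        // Read blocks until at least one byte is available (or an
        // error/EOF occurs), then returns whatever the kernel had
        // ready - anywhere from 1 byte up to len(buf).
        n, err := conn.Read(buf)
        if n > 0 {
            log.Printf("Read returned %d bytes", n)
            // process buf[:n] here
        }
        if err != nil {
            log.Printf("read finished: %v", err)
            return
        }
    }
}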
Also, you might find something like nats.io simplifies your effort. I'm not
sure if it supports extremely large message sizes off hand, but there is
probably a chunking layer available. Doing simple TCP messaging is fairly
easy; when you get into redundancy, fan-out, etc. it can get complex fast.
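For what it's worth, a minimal nats.go sketch might look like the following
(the subject name is made up, and note the server enforces a maximum payload,
1 MB by default if I remember right, so truly huge messages would still need
a chunking layer):

package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    // Each delivered Msg carries one complete payload, so the TCP
    // framing problem disappears at this level.
    sub, err := nc.Subscribe("updates", func(m *nats.Msg) {
        log.Printf("received %d bytes", len(m.Data))
    })
    if err != nil {
        log.Fatal(err)
    }
    defer sub.Unsubscribe()

    if err := nc.Publish("updates", []byte("hello")); err != nil {
        log.Fatal(err)
    }
    nc.Flush()
}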
All of the source is there... but in general it is a bit more complex under
the covers than most - the runtime uses epoll/kqueue-style readiness polling
to know when a socket is ready for IO, then schedules the goroutine blocked
in Read() to perform the IO. So Go has a bit more of its own scheduling
machinery in the path than you would see in a typical synchronous read.
> One other note, if you have a request / response type protocol with
> fairly defined lengths, you don’t need a buffer larger than the largest
> message if you don’t allow concurrent requests from the same client.
Yes, understood, that was not the constraint; I have to process an unknown
amount of data [...]
One other note, if you have a request / response type protocol with fairly
defined lengths, you don’t need a buffer larger than the largest message if you
don’t allow concurrent requests from the same client.
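A rough sketch of that idea, assuming a 4-byte big-endian length prefix and
an arbitrary 1 MB cap: because requests are not concurrent, one buffer sized
to the largest legal message can be reused for every request.

package main

import (
    "encoding/binary"
    "fmt"
    "io"
    "log"
    "net"
)

const maxMsg = 1 << 20 // largest message the protocol allows (assumed)

// handle serves one client, reusing a single buffer sized to the maximum
// message because requests are strictly one at a time.
func handle(conn net.Conn) error {
    defer conn.Close()
    buf := make([]byte, maxMsg)
    var hdr [4]byte
    for {
        // 4-byte big-endian length prefix (assumed framing).
        if _, err := io.ReadFull(conn, hdr[:]); err != nil {
            return err // io.EOF here means the client hung up cleanly
        }
        n := binary.BigEndian.Uint32(hdr[:])
        if n > maxMsg {
            return fmt.Errorf("message of %d bytes exceeds limit", n)
        }
        // Read exactly n bytes of body into the reusable buffer.
        if _, err := io.ReadFull(conn, buf[:n]); err != nil {
            return err
        }
        // ... process buf[:n] and write the response here ...
    }
}

func main() {
    ln, err := net.Listen("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func() {
            if err := handle(conn); err != nil && err != io.EOF {
                log.Println("client error:", err)
            }
        }()
    }
}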
> On Dec 31, 2019, at 9:35 AM, Robert Engels wrote:
>
>
> The important factors: 1) [...]
The important factors: 1) the larger the buffer, the fewer system calls
need to be made; 2) the larger the buffer, the more memory used, which can
be significant with many simultaneous connections.
You need to remember that with the correct socket options the kernel is
already buffering, and adding [...]
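If you do want to influence the kernel-side buffering, *net.TCPConn exposes
those socket buffers directly; a sketch (sizes and address are arbitrary):

package main

import (
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    if tc, ok := conn.(*net.TCPConn); ok {
        // These adjust the kernel's SO_RCVBUF/SO_SNDBUF for the socket,
        // independent of whatever []byte you pass to Read.
        if err := tc.SetReadBuffer(256 * 1024); err != nil {
            log.Println("SetReadBuffer:", err)
        }
        if err := tc.SetWriteBuffer(256 * 1024); err != nil {
            log.Println("SetWriteBuffer:", err)
        }
    }
}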
Thanks for all the great responses.
How much of the read behavior is in Go's underlying code on the read side,
and how much is in the underlying OS driver that implements the read? I
understand the stream nature of the TCP connection and I handle that in my
code; I was just looking [...]
Oh, and I don’t think SCTP is natively supported on Windows yet.
So your interoperability may vary...
> On Dec 30, 2019, at 4:17 PM, Robert Engels wrote:
>
>
> I'm pretty sure I'm correct. It is a socket type, not an option on TCP,
> which equates to a different protocol. If you use that option [...]
I'm pretty sure I'm correct. It is a socket type, not an option on TCP,
which equates to a different protocol. If you use that option you get an
SCTP transport, not TCP.
> On Dec 30, 2019, at 4:06 PM, Bruno Albuquerque wrote:
>
>
> Although I am no expert in the subject, I would doubt this assertion. [...]
Although I am no expert in the subject, I would doubt this assertion. It is
there in the socket man page on an Ubuntu machine with no mention of
anything specific being needed (other than the implicit fact that you need
a TCP stack that supports it, which should be true for any modern version
of Linux).
That option requires proprietary protocols, not standard TCP/UDP.
> On Dec 30, 2019, at 12:04 PM, Bruno Albuquerque wrote:
>
>
> But, to complicate things, you can create what is basically a TCP
> connection with packet boundaries using SOCK_SEQPACKET (as opposed to
> SOCK_STREAM or SOCK_DGRAM).
But, to complicate things, you can create what is basically a TCP
connection with packet boundaries using SOCK_SEQPACKET (as opposed to
SOCK_STREAM or SOCK_DGRAM).
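Roughly, on Linux with kernel SCTP support, that looks like the sketch below
using golang.org/x/sys/unix, since the net package does not expose
SOCK_SEQPACKET for IP sockets (address and port are placeholders):

package main

import (
    "log"

    "golang.org/x/sys/unix"
)

func main() {
    // SOCK_SEQPACKET + SCTP: reliable and connection-oriented, but with
    // record (message) boundaries preserved, unlike a TCP stream.
    fd, err := unix.Socket(unix.AF_INET, unix.SOCK_SEQPACKET, unix.IPPROTO_SCTP)
    if err != nil {
        log.Fatal(err) // typically EPROTONOSUPPORT if the kernel lacks SCTP
    }
    defer unix.Close(fd)

    sa := &unix.SockaddrInet4{Port: 9000, Addr: [4]byte{127, 0, 0, 1}}
    if err := unix.Connect(fd, sa); err != nil {
        log.Fatal(err)
    }

    // Each Sendmsg is delivered to the peer as one discrete record.
    if err := unix.Sendmsg(fd, []byte("one whole message"), nil, nil, 0); err != nil {
        log.Fatal(err)
    }
}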
On Mon, Dec 30, 2019 at 9:04 AM Jake Montgomery wrote:
> It sounds like maybe you have some misconceptions about TCP. It is a
> stream [...]
ReadAll reads until EOF (growing its buffer as needed); a plain Read returns
after at most the buffer length.
> On Dec 30, 2019, at 11:04 AM, Jake Montgomery wrote:
>
>
> It sounds like maybe you have some misconceptions about TCP. It is a stream
> protocol; there are no data boundaries that are preserved. If you send 20
> bytes via TCP in a single call, it is [...]
It sounds like maybe you have some misconceptions about TCP. It is a stream
protocol; there are no data boundaries that are preserved. If you send 20
bytes via TCP in a single call, it is *likely* that those 20 will arrive
together at the client. But it is *NOT guaranteed*. It is perfectly
legitimate for them to arrive split across multiple reads [...]
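Which is why, when the protocol says "the next 20 bytes are the payload",
the usual idiom is io.ReadFull - it keeps calling Read until exactly that
many bytes have arrived. A sketch (the address is a placeholder):

package main

import (
    "io"
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    payload := make([]byte, 20)
    // ReadFull loops over conn.Read until all 20 bytes have arrived,
    // however the network happened to split them.
    if _, err := io.ReadFull(conn, payload); err != nil {
        log.Fatal(err) // io.ErrUnexpectedEOF if the peer closed early
    }
    log.Printf("got payload: %x", payload)
}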
Use read and expand the buffer as needed (or write the chunks to a file). If at
the end it is all going to be in memory, you might as well start with the very
large buffer. There is nothing special about Go in this regard - it’s standard
IO processing.
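A sketch of both variants (grow an in-memory buffer as data arrives, or
stream straight to disk; the file name is made up):

package main

import (
    "bytes"
    "io"
    "log"
    "net"
    "os"
)

// readIntoMemory grows a buffer as data arrives, until the peer closes.
func readIntoMemory(conn net.Conn) ([]byte, error) {
    var buf bytes.Buffer
    _, err := io.Copy(&buf, conn) // loops over Read until EOF or error
    return buf.Bytes(), err
}

// readToFile streams the connection to disk instead of holding it in RAM.
func readToFile(conn net.Conn, path string) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = io.Copy(f, conn)
    return err
}

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Pick one strategy or the other for a given connection.
    if err := readToFile(conn, "payload.bin"); err != nil { // made-up path
        log.Fatal(err)
    }
}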
> On Dec 29, 2019, at 9:21 AM, Ron Wahler wrote:
Jake,
Thanks for the reply. Csrc.Read is what I was referring to as the
connection's standard read; I should not have used the words "standard read",
sorry about that. The problem I am trying to solve is reading an unknown
amount of byte data. I am trying to understand what triggers the
Csrc.Read() call to return [...]
On Friday, December 27, 2019 at 10:36:28 PM UTC-5, Ron Wahler wrote:
>
> I did look at ReadAll, but it won't return until it sees EOF. I am trying
> to find something that would return when the standard read would return. I
> get the memory part and would manage that. Any other ideas?
>
[...]
The standard read will return early as soon as some of the read is satisfied
and a subsequent block would occur (or timeout) - so you have to decide when
you want to stop reading...
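One way to express "stop when the other side goes quiet" is a read deadline;
a sketch with an arbitrary 2-second idle window and a placeholder address:

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 32*1024)
    var total int
    for {
        // Give up if nothing arrives for 2 seconds (arbitrary choice).
        conn.SetReadDeadline(time.Now().Add(2 * time.Second))
        n, err := conn.Read(buf)
        total += n
        if err != nil {
            if ne, ok := err.(net.Error); ok && ne.Timeout() {
                log.Printf("idle timeout after %d bytes, stopping", total)
                return
            }
            log.Printf("read ended: %v (got %d bytes)", err, total)
            return
        }
        // process buf[:n]
    }
}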
>> On Dec 27, 2019, at 9:48 PM, Robert Engels wrote:
>
> You need a termination point. In the case of ReadString it is the line
> terminator. [...]
You need a termination point. In the case of ReadString it is the line
terminator. For an arbitrary read it is either a length or EOF - or you can
read until the underlying socket has no more data, but this is generally
useless unless you are doing higher-level buffering and protocol parsing.
[...]
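For the delimiter case, bufio does that higher-level buffering and parsing;
a newline-terminated protocol is roughly this much code (the delimiter and
address are just examples):

package main

import (
    "bufio"
    "io"
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    r := bufio.NewReader(conn)
    for {
        // ReadString keeps pulling from the socket until it sees '\n',
        // so the terminator - not the size of any one Read - decides
        // where a message ends.
        line, err := r.ReadString('\n')
        if len(line) > 0 {
            log.Printf("message: %q", line)
        }
        if err != nil {
            if err != io.EOF {
                log.Println("read error:", err)
            }
            return
        }
    }
}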
I did look at ReadAll, but it won't return until it sees EOF. I am trying
to find something that would return when the standard read would return. I
get the memory part and would manage that. Any other ideas?
thanks,
Ron
On Friday, December 27, 2019 at 5:11:42 PM UTC-7, Ron Wahler wrote:
>
>