[go-nuts] Re: atomic bugs

2017-03-19 Thread T L


On Sunday, March 19, 2017 at 3:03:21 AM UTC+8, T L wrote:
>
> At the end of the sync/atomic package docs, it says:
>
> On x86-32, the 64-bit functions use instructions unavailable before the 
> Pentium MMX. 
>
> On non-Linux ARM, the 64-bit functions use instructions unavailable before 
> the ARMv6k core. 
>
> So when Go programs which call the 64-bit atomic functions run on the 
> machines mentioned above, will they crash?
>
> If that is true, would it be a good idea to add a compiler option to 
> convert the 64-bit function calls to mutex calls?
>
> And is it possible to do the conversion at run time?
>
> I also read somewhere that the Go authors somewhat regret exposing the 
> atomic functions, since they were intended for internal use in the 
> standard packages.
>
> So would it be a good idea to recommend that gophers use mutexes over 
> atomics, and have the compiler convert some mutex calls to atomic calls 
> atomically?
>

Sorry, by "atomically" here I meant "automatically".

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[go-nuts] Re: Aren't package declarations (and per file imports) redundant?

2017-03-19 Thread pierre . curto
Besides what has been said, another use for the package declaration is 
being able to declare "main" programs that are ignored at build time but 
run by go generate within the package.

A good example of this is in the gob package:
https://golang.org/src/encoding/gob/dec_helpers.go is generated by 
running https://golang.org/src/encoding/gob/decgen.go, which is invoked by 
the go:generate directive in https://golang.org/src/encoding/gob/decode.go.


Le samedi 18 mars 2017 12:49:57 UTC+1, Sunder Rajan Swaminathan a écrit :
>
> Before anyone flames, I love Go! There. Ok, now to the issue at hand -- 
>
> The toolchain already seems to understand the directory layout, so why 
> bother littering the sources with package declarations? Also, is there a 
> point to specifying imports at the file level? I mean, doesn't the linker 
> bring in symbols at the package level anyway? My reason for bringing this up 
> is that I'm trying to generate a codebase from a custom-built specification, 
> and having to constantly tweak the imports and packages (in over 200 files) 
> is getting in the way of smooth development. I'm sure others have had the 
> same problem.
>
> In the spirit of bringing a solution and not just a problem, how about the 
> toolchain assume a package to be "main" if there's a main function therein? 
> Imports could be specified at the package level, as in D or Rust, in a 
> separate file.
>
> Thanks!
>
> Thanks!
>



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Konstantin Shaposhnikov
Hi,

External measurements probably show a more accurate picture.

First of all, internal latency numbers only include time spent doing actual 
work; they don't include HTTP parsing (by net/http) or network overhead.

Secondly, latency measured internally always looks better because it doesn't 
include application stalls that happened outside of the measured code. 
Imagine that it takes 10ms for net/http to parse the request (e.g. due to a 
STW pause) and 1ms to run the handler. The real request latency is 11ms in 
this case, but if measured internally it is only 1ms. This is known as 
coordinated omission.

I recommend watching this video for lots of useful information about 
latency measurement: https://www.youtube.com/watch?v=lJ8ydIuPFeU

Konstantin

On Saturday, 18 March 2017 19:52:21 UTC, Alexander Petrovsky wrote:
>
> Hello!
>
> Colleagues, I need your help!
>
> So, I have an application that accepts dynamic JSON over HTTP (fasthttp), 
> unmarshals it into a map[string]interface{} using ffjson, reads some fields 
> into a struct, makes some calculations using this struct, writes the struct 
> fields back into the map[string]interface{}, writes this map to Kafka 
> (asynchronously), and finally replies to the client over HTTP. Also, I have 
> 2 caches, one containing 100 million items and the other 20 million; these 
> caches are built using freecache to avoid slow GC pauses. The incoming rate 
> is 4k rps per server (5 servers in all), and total CPU utilisation is about 
> 15% per server.
>
> The problem: my latency measurements show me that the latency inside the 
> application is significantly less than outside.
> 1. How do I measure latency?
> - I've added timings into the http function handlers, and then I make 
> graphs.
> 2. How did I learn that the latency inside the application is significantly 
> less than outside?
> - I've installed the nginx server in front of my application and log 
> $request_time and $upstream_response_time, then make graphs too.
>
> The graphs show me that inside the application latency is about 500 
> microseconds at the 99th percentile, and about 10-15 milliseconds outside 
> (nginx). Nginx and my app run on the same server. My graphs show me that 
> GC occurs every 30-40 seconds and takes less than 3 milliseconds.
>
>
> 
>
>
> 
>
>
> Could someone help me find the problem and profile my application?
>



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Alexander Petrovsky
Hello, Dave!

On Sunday, March 19, 2017 at 3:28:13 UTC+3, David Collier-Brown wrote:
>
> Are you seeing the average response time / latency of the cache from 
> outside? 
>

I don't calculate averages, I'm using percentiles! It looks like the "cache" 
has no effect at all; otherwise I'd see it on my graphs, since I call my 
cache inside the http handler, between the timings.
 

> If so, you should see lots of really quick responses, and a few slow 
> ones that average out to what you're seeing.
>

No, as I said, I'm using only percentiles, not averages.
 

>
> --dave
>



Re: [go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Jesper Louis Andersen
My approach is usually this:

When a problem like this occurs, I very quickly switch from random guessing
at what the problem can be into a mode where I try to verify the mental
model I have of the system. Your mental model is likely wrong, and thus it
is leading you astray in what the problem might be. So I start devising
metrics that can support the mental model I have. Often, when your model is
corrected, you start understanding the pathology of the system. I tend to
start from the bottom and work up through the layers, trying to verify in
each layer that I'm seeing behavior that isn't out of the ordinary from the
mental model I have.

* At 4000 req/s, we implicitly assume that each request looks the
same. Otherwise that is a weak metric as an indicator of system behavior.
Are they the same and do they take the same work? If we log the slowest request
every 5 seconds, what does it look like compared to one of the typical ones?
* The 99th percentile ignores the 40 slowest queries. What do the 99.9th,
99.99th, ... and max percentiles look like?
* What lies between the external measurement and the internal measurement?
Can we inject a metric for each of those?
* The operating system and environment are only doing work for us, and not
for someone else because the machine is virtualized, or some other operation
is running.
* There is enough bandwidth.
* Caches have hit/miss rates that look about right.
* The cache also caches negative responses. That is, if an element is not
present in the backing store, a lookup in the cache will not fail on
repeated requests and go to said backing store.
* 15% CPU load means we are spending ample amounts of time waiting. What
are we waiting on? Start measuring foreign support systems further down the
chain. Don't trust your external partners. Especially if they are a network
connection away. What are the latencies for the waiting down the line?
* Are we measuring the right thing in the internal measurements? If the
window between external/internal is narrow, then chances are we are doing
the wrong thing on the internal side.


Google's SRE handbook mentions the four "golden" signals. If nothing else,
measuring those on a system can often tell you whether it is behaving or not.


[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Alexander Petrovsky
Hello, Konstantin!

On Sunday, March 19, 2017 at 14:19:36 UTC+3, Konstantin Shaposhnikov wrote:
>
> Hi,
>
> External measurements probably show a more accurate picture.
>

Of course!
 

>
> First of all, internal latency numbers only include time spent doing actual 
> work; they don't include HTTP parsing (by net/http) or network overhead.
>
 
Yep, I absolutely agree with you, but I don't use net/http, I use 
fasthttp (jfyi). I don't believe that HTTP parsing can take more than a few 
microseconds, and network overhead on the local machine is insignificantly 
small!
 

> Secondly, latency measured internally always looks better because it 
> doesn't include application stalls that happened outside of the measured 
> code.
>

Agree!
 

> Imagine that it takes 10ms for net/http to parse the request (e.g. due to a 
> STW pause) and 1ms to run the handler. The real request latency is 11ms in 
> this case, but if measured internally it is only 1ms. This is known as 
> coordinated omission.
>

As I said earlier, I don't believe that http parsing and other "run-time" 
stuff can take 10ms; that would be unacceptable! For example, suppose this 
situation does take place: why don't I see similar spikes in both graphs 
(nginx latency, myapp latency), just with a different time order? Here is 
part of a sample from my nginx log:

# cat access.log-20170318 | grep "17/Mar/2017:03:42:17" | awk '{ print $15,$16 }' | sort | uniq -c
   2056 0.000 0.000
    200 0.001 0.000
   1313 0.001 0.001
      3 0.002 0.001
      9 0.002 0.002
      5 0.003 0.003
      3 0.004 0.004
      4 0.005 0.005
      5 0.006 0.006
      4 0.007 0.007
      2 0.008 0.007
      5 0.008 0.008
      1 0.009 0.009


As you can see, your hypothesis is not true: more than 99 percent of 
requests are really fast and take less than 1 millisecond! And I'm trying to 
find out what happens in that 1 percent!
 

> I recommend watching this video for lots of useful information about 
> latency measurement: https://www.youtube.com/watch?v=lJ8ydIuPFeU
>

I've started watching this video, thanks. One thing that I want to share: I 
agree that measuring the latency only inside my handler function is not 
right, and the main question is how I can measure the latency in other parts 
of my application. That is the main question of this topic!
 

>
>
> Konstantin
>



Re: [go-nuts] Re: atomic bugs

2017-03-19 Thread Michael Jones
In general, it is not so much "will crash" as "will not run".




-- 
Michael T. Jones
michael.jo...@gmail.com



Re: [go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Alexander Petrovsky
Hello, Jesper! Nice to see you outside the Erlang community too!


On Sunday, March 19, 2017 at 18:09:17 UTC+3, Jesper Louis Andersen wrote:
>
> My approach is usually this:
>
> When a problem like this occurs, I very quickly switch from random 
> guessing at what the problem can be into a mode where I try to verify the 
> mental model I have of the system. Your mental model is likely wrong, and 
> thus it is leading you astray in what the problem might be. So I start 
> devising metrics that can support the mental model I have. Often, when your 
> model is corrected, you start understanding the pathology of the system. I 
> tend to start from the bottom and work up through the layers, trying to 
> verify in each layer that I'm seeing behavior that isn't out of the 
> ordinary from the mental model I have.
>

I absolutely agree with you on that: first put forward a hypothesis, 
and then try to confirm or disprove it! The problem is, I have no more 
hypotheses!
 

>
> * At 4000 req/s, we implicitly assume that each request looks the 
> same. Otherwise that is a weak metric as an indicator of system behavior. 
> Are they the same and do they take the same work? If we log the slowest 
> request every 5 seconds, what does it look like compared to one of the 
> typical ones?
>

All requests are the same and have the same behavior! I log all 
requests and they are all similar.
 

> * The 99th percentile ignores the 40 slowest queries. What do the 99.9th, 
> 99.99th, ... and max percentiles look like?
>

I have no answer to this question, and I don't know how it would help me.
 

> * What lies between the external measurement and the internal measurement? 
> Can we inject a metric for each of those?
>

Yep, that's also the main question! I log and graph nginx 
$request_time, and log and graph the internal function time. What lies in 
between, I can't log; it's:
 - the local network (TCP);
 - work in kernel/user space;
 - the Go GC and other run-time work;
 - the fasthttp machinery before my http handler is called.
 

> * The operating system and environment are only doing work for us, and not 
> for someone else because the machine is virtualized, or some other 
> operation is running.
>

Only for us! There is no other application that could impact my 
application's performance!
 

> * There is enough bandwidth.
>

Bandwidth looks sufficient; my graphs show me that. And as far as I know, 
the local network inside a single server can't affect application 
performance that much.
 

> * Caches have hit/miss rates that look about right.
>

In my application these are not true caches; really they are dictionaries 
loaded from the database and used in calculations.
 

> * The cache also caches negative responses. That is, if an element is not 
> present in the backing store, a lookup in the cache will not fail on 
> repeated requests and go to said backing store.
>

See my answer earlier. :)
 

> * 15% CPU load means we are spending ample amounts of time waiting. What 
> are we waiting on?
>

Maybe, or maybe 32 cores can simply handle the 4k rps. How can I find out 
what my app is waiting on?
 

> Start measuring foreign support systems further down the chain. Don't 
> trust your external partners. Especially if they are a network connection 
> away. What are the latencies for the waiting down the line?
>

Yep, I measure latency on my side using nginx: I log $request_time and 
graph it after that.
 

> * Are we measuring the right thing in the internal measurements? If the 
> window between external/internal is narrow, then chances are we are doing 
> the wrong thing on the internal side.
>

Could you explain this?
 

>
> Google's SRE handbook mentions the four "golden" signals. If nothing else, 
> measuring those on a system can often tell you whether it is behaving or 
> not.
>

[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Tamás Gulácsi
Since fasthttp does not even fully follow the specs, you cannot assume that 
all requests are parsed the same until you prove it.



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Alexander Petrovsky
As far as I know it doesn't matter; before, I used net/http, and the 
situation was no different, except that the number of allocations was 
reduced.

Could you please point out where fasthttp doesn't follow the specs?

On Sunday, March 19, 2017 at 20:00:25 UTC+3, Tamás Gulácsi wrote:
>
> Since fasthttp does not even fully follow the specs, you cannot assume 
> that all requests are parsed the same until you prove it.
>
>



Re: [go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Jesper Louis Andersen
On Sun, Mar 19, 2017 at 4:58 PM Alexander Petrovsky 
wrote:



> * The 99th percentile ignores the 40 slowest queries. What do the 99.9th,
> 99.99th, ... and max percentiles look like?
>
> I have no answer to this question, and I don't know how it would help me.


Usually, the maximum latency is a better indicator of trouble than a 99th
percentile in my experience. If you improve the worst case, then surely the
other cases are likely to follow. However, there are situations where this
will hurt the median 50th percentile latency. Usually this trade-off is
okay, but there are a few situations where it might not be.


> Yep, that's also the main question! I log and graph nginx
> $request_time, and log and graph the internal function time. What lies in
> between, I can't log; it's:
>  - the local network (TCP);
>  - work in kernel/user space;
>  - the Go GC and other run-time work;
>  - the fasthttp machinery before my http handler is called.


The kernel and GC can be dynamically inspected. I'd also seriously consider
profiling in a laboratory environment. Your hypothesis is that none of these
show a discrepancy, but they may.



> * Caches have hit/miss rates that look about right.
>
> In my application these are not true caches; really they are dictionaries
> loaded from the database and used in calculations.


Perhaps the code in https://godoc.org/golang.org/x/text is of use for this?
It tends to be faster than maps because it utilizes compact string
representations and tries. Of course, it requires that you first show the
problem is with the caching sublayer.


> * 15% CPU load means we are spending ample amounts of time waiting. What
> are we waiting on?
>
> Maybe, or maybe 32 cores can simply handle the 4k rps. How can I find out
> what my app is waiting on?


blockprofile is my guess at what I would grab first. Perhaps the tracing
functionality as well. You can also add metrics at each blocking point in
order to get an idea of where the system is going off. Functionality like
dtrace would be nice, but I'm not sure Go has it, unfortunately.




> * Are we measuring the right thing in the internal measurements? If the
> window between external/internal is narrow, then chances are we are doing
> the wrong thing on the internal side.
>
> Could you explain this?


There may be a bug in the measurement code, so you should probably go over
it again. One common fault of mine is to place the measurement around the
wrong functions, so I think they are detecting more than they are. A single
regular expression that is only hit in corner cases can be enough to mess
with a performance profile. Another common mistake is to not have an
appropriate decay parameter on your latency measurements, so that older
requests eventually get removed from the latency graph[0].

In general, as the amount of work a system processes goes up, it gets more
sensitive to fluctuations in latency. So even at a fairly low CPU load, you
may still have some spiky behavior hidden by a smoothing of the CPU load
measure, and this can contribute to added congestion.

[0] A decaying Vitter's algorithm R implementation, or Tene's HdrHistogram
is preferable. HdrHistogram is interesting in that it uses a floating-point
representation for its counters: one array for exponents, one array for
mantissa. It allows very fast accounting (nanoseconds) and provides precise
measurements around 0 at the expense of precision at, say, 1 hour. It is
usually okay because if you waited 1 hour, you don't care if it was really
1 hour and 3 seconds. But at 1us, you really care about being precise.



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Konstantin Shaposhnikov

>
>
> As you can see, your hypothesis is not true: more than 99 percent of 
> requests are really fast and take less than 1 millisecond! And I'm trying 
> to find out what happens in that 1 percent!
>  
>

I was probably not clear enough in my explanation. In 99% of cases 
net/http (or fasthttp) parsing will be very fast (a few micros) and won't 
add much to the internally measured latency. However, in 1% of cases there 
could be a GC stop-the-world pause, or the Go runtime may decide to use the 
request goroutine to assist GC, or there is some sub-optimal scheduling 
decision or I/O, and the request will take longer; but this will never be 
reflected in the measured time.

https://golang.org/cmd/trace/ can be used to find out what is happening 
inside a running Go application. If you capture a trace during an interval 
with request(s) taking more time than usual, then you will be able to find 
out what exactly takes so long (syscalls, scheduler, GC, etc).

Also note that there are still a few latency related bugs in Go runtime. 
E.g. https://github.com/golang/go/issues/14812, 
https://github.com/golang/go/issues/18155, 
https://github.com/golang/go/issues/18534



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread Tamás Gulácsi
https://github.com/valyala/fasthttp/blob/master/README.md FAQ says "net/http 
handles more HTTP corner cases".
For me, that means not fully following the specs.



[go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Carl
Hi,

This is a question to whoever decided that go will use tabs - team or 
person:

Could you please explain your reasoning behind the decision?

So far, all my googling has just turned up the what and not the why:

States that tabs are to be used:
cmd/gofmt: remove -tabs and -tabwidth flags 

Command gofmt 

Asks the question, but quickly gets off topic without answering it:
Why does go fmt use an 8 space indent 


So my question is why were tabs chosen? 
I have no preference for tabs vs anything else, but I do respect the go 
team and the language designers and would really like to know the thinking 
behind the decision.

Cheers,
Carl



Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Rob Pike
How wide should the indentation be? 2 spaces? 4? 8? Something else?

By making the indent be a tab, you get to decide the answer to that
question and everyone will see code indented as wide (or not) as they
prefer.

In short, this is what the tab character is for.

-rob


On Sun, Mar 19, 2017 at 1:50 PM, Carl  wrote:

> Hi,
>
> This is a question to whoever decided that go will use tabs - team or
> person:
>
> Could you please explain your reasoning behind the decision?
>
> So far, all my googling has just turned up the what and not the why:
>
> States that tabs are to be used:
> cmd/gofmt: remove -tabs and -tabwidth flags
> 
> Command gofmt 
>
> Asks the question, but quickly gets off topic without answering it:
> Why does go fmt use an 8 space indent
> 
>
> So my question is why were tabs chosen?
> I have no preference for tabs vs anything else, but I do respect the go
> team and the language designers and would really like to know the thinking
> behind the decision.
>
> Cheers,
> Carl
>
>



Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Carl
Exactly what I was looking for. Thank you!

On Monday, March 20, 2017 at 10:36:06 AM UTC+13, Rob 'Commander' Pike wrote:
>
> How wide should the indentation be? 2 spaces? 4? 8? Something else?
>
> By making the indent be a tab, you get to decide the answer to that 
> question and everyone will see code indented as wide (or not) as they 
> prefer.
>
> In short, this is what the tab character is for.
>
> -rob
>
>
> On Sun, Mar 19, 2017 at 1:50 PM, Carl > 
> wrote:
>
>> Hi,
>>
>> This is a question to whoever decided that go will use tabs - team or 
>> person:
>>
>> Could you please explain your reasoning behind the decision?
>>
>> So far, all my googling has just turned up the what and not the why:
>>
>> States that tabs are to be used:
>> cmd/gofmt: remove -tabs and -tabwidth flags 
>> 
>> Command gofmt 
>>
>> Asks the question, but quickly gets off topic without answering it:
>> Why does go fmt use an 8 space indent 
>> 
>>
>> So my question is why were tabs chosen? 
>> I have no preference for tabs vs anything else, but I do respect the go 
>> team and the language designers and would really like to know the thinking 
>> behind the decision.
>>
>> Cheers,
>> Carl
>>
>>
>
>



[go-nuts] Expression evaluation with side effects

2017-03-19 Thread Jan Mercl
While trying to resolve a failing (C) test case[0] I encountered a (Go)
behavior I do not understand. This code[1]

package main

import (
	"fmt"
)

var (
	x  = [1]int{2}
	x2 = [1]int{2}
)

func foo() int {
	x[0] |= 128
	return 1
}

func foo2() int {
	x2[0] |= 128
	return 1
}

func main() {
	x[0] |= foo()
	fmt.Println(x[0])
	v := x2[0] | foo2()
	fmt.Println(v)
}

outputs:

3
131

It seems to me that the two numbers should be the same. (?)

Thanks in advance to anyone enlightening me.

  [0]:
https://github.com/gcc-mirror/gcc/blob/4107202e2b8f814f4c63a61b043cfb36a3798de3/gcc/testsuite/gcc.c-torture/execute/pr58943.c
  [1]: https://play.golang.org/p/fGibPFuejQ

-- 

-j



Re: [go-nuts] Expression evaluation with side effects

2017-03-19 Thread Steven Hartland
I think the following section of the spec should explain the strange 
behaviour you're seeing:

https://golang.org/ref/spec#Order_of_evaluation
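For what it's worth, the spec guarantees left-to-right order only for 
function calls; the order of the plain x2[0] read relative to the foo2() 
call is left unspecified. A minimal sketch (reusing x2/foo2 from the post) 
of how to make the result deterministic by sequencing the read into its own 
statement:

```go
package main

import "fmt"

var x2 = [1]int{2}

func foo2() int {
	x2[0] |= 128
	return 1
}

func main() {
	// Reading x2[0] in its own statement sequences the load before the
	// call, so the result no longer depends on unspecified operand order.
	tmp := x2[0]      // tmp == 2, read before foo2 runs
	v := tmp | foo2() // 2 | 1 == 3, regardless of foo2's side effect
	fmt.Println(v)    // prints 3
}
```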

On 19/03/2017 22:59, Jan Mercl wrote:
While trying to resolve a failing (C) test case[0] I encountered a 
(Go) behavior I do not understand. This code[1]


package main

import (
	"fmt"
)

var (
	x  = [1]int{2}
	x2 = [1]int{2}
)

func foo() int {
	x[0] |= 128
	return 1
}

func foo2() int {
	x2[0] |= 128
	return 1
}

func main() {
	x[0] |= foo()
	fmt.Println(x[0])
	v := x2[0] | foo2()
	fmt.Println(v)
}
outputs:

3
131

It seems to me that the two numbers should be the same. (?)

Thanks in advance to anyone enlightening me.

  [0]: 
https://github.com/gcc-mirror/gcc/blob/4107202e2b8f814f4c63a61b043cfb36a3798de3/gcc/testsuite/gcc.c-torture/execute/pr58943.c

  [1]: https://play.golang.org/p/fGibPFuejQ

--

-j





Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Tim K
gofmt documentation says:

Gofmt formats Go programs. It uses tabs (*width = 8*) for indentation and 
> blanks for alignment.
>

https://golang.org/cmd/gofmt/

Just curious, any reason why it needs to specify the tab width = 8? Should 
that be removed if it's not relevant?

Thanks!


On Sunday, March 19, 2017 at 2:36:06 PM UTC-7, Rob 'Commander' Pike wrote:
>
> How wide should the indentation be? 2 spaces? 4? 8? Something else?
>
> By making the indent be a tab, you get to decide the answer to that 
> question and everyone will see code indented as wide (or not) as they 
> prefer.
>
> In short, this is what the tab character is for.
>
> -rob
>
>
> On Sun, Mar 19, 2017 at 1:50 PM, Carl > 
> wrote:
>
>> Hi,
>>
>> This is a question to whoever decided that go will use tabs - team or 
>> person:
>>
>> Could you please explain your reasoning behind the decision?
>>
>> So far, all my googling has just turned up the what and not the why:
>>
>> States that tabs are to be used:
>> cmd/gofmt: remove -tabs and -tabwidth flags 
>> 
>> Command gofmt 
>>
>> Asks the question, but quickly gets off topic without answering it:
>> Why does go fmt use an 8 space indent 
>> 
>>
>> So my question is why were tabs chosen? 
>> I have no preference for tabs vs anything else, but I do respect the go 
>> team and the language designers and would really like to know the thinking 
>> behind the decision.
>>
>> Cheers,
>> Carl
>>
>>
>
>



Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread 'Kevin Malachowski' via golang-nuts
I love that Go uses tabs because I use 3 spaces for my tabstop, and very few 
people share that preference.



Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Ian Davis
On Sun, 19 Mar 2017, at 09:35 PM, Rob Pike wrote:

> How wide should the indentation be? 2 spaces? 4? 8? Something else?

> 

> By making the indent be a tab, you get to decide the answer to that
> question and everyone will see code indented as wide (or not) as
> they prefer.
> 

> In short, this is what the tab character is for.



Please don't take this as criticism of the choice or of gofmt; it is purely
an observation. It seems to me that this explanation is at odds with the
philosophy of gofmt, which is that there is a single way to lay out code.
The benefits of that are obvious, but using tabs erodes it somewhat when
you read code on another computer.


I always felt the reason for using tabs was to enable support for non-
monospaced fonts and multi-width characters. A tab stop in the
traditional sense is a linear position, not a number of characters.


Ian




Re: [go-nuts] Why were tabs chosen for indentation?

2017-03-19 Thread Wojciech S. Czarnecki

> > On Sun, 19 Mar 2017, at 09:35 PM, Rob Pike wrote:
> > everyone will see code indented as wide (or not) as they prefer.

> Ian Davis  wrote:
> It seems to me that this explanation is at odds with the philosophy of
> gofmt which is that there is a single way to lay out code.

> The benefits of that are obvious but using tabs erodes it somewhat when
> you read code on another computer.

It is the person who prefers a particular tab width who sees the code on
'another' computer. Gofmt makes the code style uniform for readability,
while mandatory tabs let everyone read code with the indentation width
they are accustomed to.


> I always felt the reason for using tabs was to enable support for non-
> monospaced fonts and multi-width characters. A tab stop in the
> traditional sense is a linear position, not a number of characters.

-- 
Wojciech S. Czarnecki
   ^oo^ OHIR-RIPE



[go-nuts] Re: Guetzli perceptual JPEG encoder for Go

2017-03-19 Thread chaishushan
Good idea.

It is a big piece of work. I suggest using cgo as the starting point:
implement some pure C functions in Go, then export them as C functions.

for example:

package main

import "C"

//export ButteraugliScoreForQuality
func ButteraugliScoreForQuality(quality C.double) C.double {
	return quality // Go implementation goes here
}

On Saturday, March 18, 2017 at 4:32:47 PM UTC+8, Val wrote:
>
> Thanks Chai!
> Do you think this is something we could translate to pure go, no requiring 
> cgo?
> I understand this would be a fair amount of work. I did a similar job 
> recently (translated some PVRTC stuff from c++ to go by copy-paste, then 
> fix everything), it went pretty well. I may try the same for Guetzli.
>
> Cheers
> Val
>
> On Friday, March 17, 2017 at 6:37:43 PM UTC+1, chais...@gmail.com wrote:
>>
>> https://github.com/chai2010/guetzli-go
>> https://github.com/google/guetzli
>>
>>



[go-nuts] How to accept any function with one return value as a parameter

2017-03-19 Thread aktungmak
Hi,

I am trying to write a function that initializes a generic collection. For 
adding new items to the collection, the user specifies a constructor which 
takes one argument and returns a pointer to the struct that will be 
inserted in the collection, and also sets fields to initial values. For 
example:

func Constructor(id int) *NewItemX ...
func Constructor(id int) *NewItemY ...
func Constructor(id int) *NewItemZ ...

In the constructor for the collection, I want to accept any function which 
has this general form, although the return type will be different for each 
struct. At first, I thought this might work:

func NewCollection(itemCtr func(int) interface{}) *Collection ...

but of course, trying to pass Constructor fails during compilation since 
the types *NewItemX and interface{} do not match:

.\Collection_test.go:xx: cannot use NewItemX (type func(int) *NewItemX) as 
type func(int) interface {} in argument to NewCollection

I could just do this:

func NewCollection(itemCtr interface{}) *Collection

but then I would have to do some runtime checks using reflect to make sure 
that it is a func etc and I lose compile-time checking of types.

How can I express this best in go?
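One common workaround in today's Go is to keep the interface{}-returning 
signature on the collection and have each caller wrap its concrete 
constructor in a one-line adapter closure; the wrapping itself stays 
fully type-checked at compile time. A minimal sketch (the Collection 
internals and all names here are assumptions, not the poster's actual code):

```go
package main

import "fmt"

// NewItemX stands in for one of the concrete item types.
type NewItemX struct{ ID int }

// NewItemXCtor is a concrete constructor: func(int) *NewItemX.
func NewItemXCtor(id int) *NewItemX { return &NewItemX{ID: id} }

// Collection stores items as interface{} and builds them with the
// constructor supplied at creation time.
type Collection struct {
	ctor  func(int) interface{}
	items []interface{}
}

func NewCollection(ctor func(int) interface{}) *Collection {
	return &Collection{ctor: ctor}
}

func (c *Collection) Add(id int) {
	c.items = append(c.items, c.ctor(id))
}

func main() {
	// The adapter closure bridges func(int) *NewItemX to
	// func(int) interface{} without reflection.
	c := NewCollection(func(id int) interface{} { return NewItemXCtor(id) })
	c.Add(42)
	fmt.Println(c.items[0].(*NewItemX).ID) // prints 42
}
```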



Re: [go-nuts] Re: Different latency inside and outside

2017-03-19 Thread a . petrovsky


On Sunday, March 19, 2017 at 20:46:16 UTC+3, Jesper Louis Andersen wrote:
>
> On Sun, Mar 19, 2017 at 4:58 PM Alexander Petrovsky  > wrote:
>
>>  
>>
> * The 99th percentile ignores the 40 slowest queries. What does the 99.9, 
>>> 9.99, ... and max percentiles look like?
>>>
>>
>> I have no answer to this question. And I don't know how it can help me?
>>
>
> Usually, the maximum latency is a better indicator of trouble than a 99th 
> percentile in my experience. If you improve the worst case, then surely the 
> other cases are likely to follow. However, there are situations where this 
> will hurt the median 50th percentile latency. Usually this trade-off is 
> okay, but there are a few situations where it might not be.
>

Got it!
 

>
>> Yep, it's the also the main question! I'm log and graph nginx 
>> $request_time, and log and graph internal function time. What is between, I 
>> can't log, it's:
>>  - local network (TCP);
>>  - work in kernel/user space;
>>  - golang GC and other run-time;
>>  - golang fasthttp machinery before call my http handler.
>>
>
> The kernel and GC can be dynamically inspected. I'd seriously consider 
> profiling as well in a laboratory environment. Your hypothesis is that none 
> of these have a discrepancy, but they may have.
>

If I understood you correctly, I think the problem is somewhere there...
 

>  
>>
>>> * Caches have hit/miss rates that looks about right.
>>>
>>
>> In my application these are not true caches; in reality it's a dictionary 
>> loaded from the database and used in the calculation.
>>
>
> Perhaps the code in https://godoc.org/golang.org/x/text is of use for 
> this? It tends to be faster than maps because it utilizes compact string 
> representations and tries. Of course, it requires you show that the problem 
> is with the caching sublayer first.
>

The dictionary is not a dictionary of words; it is a dictionary in database 
terms: some keys (ids) mapped to one or more values.

>  
>
>> * 15% CPU load means we are spending ample amounts of time waiting. What 
>>> are we waiting on?
>>>
>>
>> Maybe, or maybe the 32 cores can process the 4k rps. How can I find out 
>> what my app is waiting on?
>>
>
> blockprofile is my guess at what I would grab first. Perhaps the tracing 
> functionality as well. You can also adds metrics on each blocking point in 
> order to get an idea of where the system is going off. Functionality like 
> dtrace would be nice, but I'm not sure Go has it, unfortunately.
>

Thanks a lot, I will!
 

>  
>
>>  
>>
>>> * Are we measuring the right thing in the internal measurements? If the 
>>> window between external/internal is narrow, then chances are we are doing 
>>> the wrong thing on the internal side.
>>>
>>
>> Could you explain this?
>>
>
> There may be a bug in the measurement code, so you should probably go over 
> it again. One common fault of mine is to place the measurement around the 
> wrong functions, so I think they are detecting more than they are. A single 
> regular expression that is only hit in corner-cases can be enough to mess 
> with a performance profile. Another common mistake is to not have a 
> appropriate decay parameter on your latency measurements, so older requests 
> eventually gets removed from the latency graph[0]
>  
> In general, as the amount of work a system processes goes up, it gets more 
> sensitive to fluctuations in latency. So even at a fairly low CPU load, you 
> may still have some spiky behavior hidden by a smoothing of the CPU load 
> measure, and this can contribute to added congestion.
>
> [0] A decaying Vitter's algorithm R implementation, or Tene's HdrHistogram 
> is preferable. HdrHistogram is interesting in that it uses a floating-point 
> representation for its counters: one array for exponents, one array for 
> mantissa. It allows very fast accounting (nanoseconds) and provides precise 
> measurements around 0 at the expense of precision at, say, 1 hour. It is 
> usually okay because if you waited 1 hour, you don't care if it was really 
> 1 hour and 3 seconds. But at 1us, you really care about being precise.
>

Could you please explain what you mean by "Another common mistake is to not 
have an appropriate decay parameter on your latency measurements, so older 
requests eventually get removed from the latency graph[0]"? Why should the 
older requests be removed from the latency graph? As far as I know, 
HdrHistogram is a good fit for high-precision measurements and graphs across 
different orders of magnitude. How can it help me?
 



[go-nuts] Only build go binary from source

2017-03-19 Thread gruszczy
Hi gophers,

I am tinkering with some runtime code and I would like to build only the go 
binary, to then test it on a small program I wrote. I don't see any script in 
the source tree that would allow that; all of them also try to compile the 
standard library. I would like to avoid that, because it's easier for me to 
test my changes first on a smaller snippet of code. How can I build only the 
main binary (and where is it going to be available)?

Kind regards,



[go-nuts] Re: Different latency inside and outside

2017-03-19 Thread a . petrovsky


On Sunday, March 19, 2017 at 21:15:25 UTC+3, Konstantin Shaposhnikov wrote:
>
>
>> As you can see, your hypothesis is not true: more than 99 percent of 
>> requests are really fast and take less than 1 millisecond! And I am trying 
>> to find out what happens in this 1 percent!
>>  
>>
>
> I was probably not clear enough with my explanation. In 99% of cases 
> net/http (or fasthttp) parsing will be very fast (a few micros) and won't 
> add much to the internally measured latency. However in 1% of cases there 
> could be a GC stop the world pause or go runtime decides to use the request 
> goroutine to assist GC or some sub-optimal scheduling decision or I/O and 
> the request will take longer but this will never be reflected in the 
> measured time.
>

Ack!
 

>
> https://golang.org/cmd/trace/ can be used to find out what is happening 
> inside a running Go application. If you capture a trace during interval 
> with request(s) taking more time that usual then you will be able to find 
> out what exactly takes so long (syscalls, scheduler, GC, etc).
>

Thanks, I'll try to use it for my purpose!
 

>
> Also note that there are still a few latency related bugs in Go runtime. 
> E.g. https://github.com/golang/go/issues/14812, 
> https://github.com/golang/go/issues/18155, 
> https://github.com/golang/go/issues/18534
>

Thanks, I'll investigate them too!



Re: [go-nuts] Re: Go - language of the future!

2017-03-19 Thread Konstantin Khomoutov
On Sat, 18 Mar 2017 05:31:20 -0700 (PDT)
mhhc...@gmail.com wrote:

[...]
> For those which are language designers, comparing the language to
> such things
> like haskell is made in an attempt to make the best language design
> (small ego trip here ?),
> not the most practical, effective IRL language.
> IRL, we need both, a good language design to serve an efficient
> programming experience.
> But, as everything, too much this or that, this is not good.
[...]

I'm afraid you failed to grasp what J.L. Andersen formulated so well
(as usual), so here's my take at providing an executive summary :-)

"It's next to impossible to reliably predict what properties of
programming languages the future will require, and while Go has many
excellent properties, Go is not everything, and there are a number of
domains in which it won't shine or even fly at all."

To put it even simpler, Go is a very good fit for what it's a very good
fit for right now, and in the near term.  All speculations about the future
are, well, pure speculations. ;-)
