[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I didn't even think of the idea of using buffered channels; I was trying not 
to lean too far in that direction. But it's good that you mention it: it would 
be simple to pre-allocate a buffer and trigger the print call only when that 
buffer fills up (say half a screenful, maybe 4 KB, to allow for stupidly heavy 
logging output).
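
A rough sketch of what I mean (the names and the 4 KB threshold are just 
placeholders, not code from my logger):

package logbuf

import "fmt"

// collector drains msgChan and only calls Print once the pending bytes
// pass a threshold, so heavy logging turns into a few big writes.
func collector(msgChan <-chan string) {
    const threshold = 4096 // ~half a screenful / 4 KB
    buf := make([]byte, 0, threshold)
    for m := range msgChan {
        buf = append(buf, m...)
        if len(buf) >= threshold {
            fmt.Print(string(buf)) // one print call per batch, not per line
            buf = buf[:0]
        }
    }
    if len(buf) > 0 { // flush whatever is left when the channel is closed
        fmt.Print(string(buf))
    }
}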

As you point out, there is already some threading and buffering going on in 
the writer.

I do think, though, that while you are correct that the load is only shifted 
rather than reduced, the scheduler can keep the threads more cleanly 
separated. In my use case there is only one main thread, processing a lot of 
crypto-heavy virtual machine work, and the important thing is that diversions 
out of that thread have low overhead, not that the total load is reduced.

As I mentioned, I wrote a logger that defers the processing until two hops 
downstream towards the root. I am going to look at buffering it, and I think 
it would be good to pace it by time rather than by line-printing speed. When 
heavy debug printing is in use, performance is not a concern at all; but the 
hot path is a single thread, so the less time that thread spends talking to 
other threads, the better. Its loops are very short, under 100ms most of the 
time, and making actual calls to log or fmt print functions adds more overhead 
to the main thread than loading a channel and dispatching the message.

On my main workstation there are another 11 cores, and they can be busy doing 
things without slowing the central process, so long as they don't need to 
synchronise with it or put its loop on hold longer than necessary. Using a 
buffer and a fast ticker to fill it and flush bulk sends to the output sounds 
like a sensible idea to me; since these are just slices of strings it is easy 
to design the whole thing to flow as a stream, with only one bottleneck where 
messages are appended to the buffer.

On Sunday, 17 March 2019 10:15:29 UTC+1, Christophe Meessen wrote:
>
> What you did is decouple production from consumption. This can speed up the 
> production goroutine if the production rate is irregular. But if, on average, 
> consumption is too slow, the list will grow until it runs out of memory.
>
> If you want to speed up consumption, you can group the strings into one big 
> string and print that once. This reduces the rate of system calls compared to 
> printing each string individually.
>
> Something like this (warning: raw untested code)
>
>
> buf := make([]byte, 0, 1024)
> ticker := time.NewTicker(1 * time.Second)
> for {
>     select {
>     case ticker.C:
>         if len(buf) > 0 {
>             fmt.Print(string(buf))
>             buf = buf[:0]
>         }
>     case m := <-buffChan:
>         buf = append(buf, m...)
>         buf = append(buf, '\n')
>     }
> }
>
>
> Adjust the ticker period and the initial buffer size to match your needs.
>
>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I just wrote this a bit over a month ago:

https://git.parallelcoin.io/dev/pod/src/branch/master/pkg/util/cl

It was brutally simple before (just one 600-line source file with lots of type 
switches), but it now also has a registration system for setting up arbitrary 
subsystem log levels.

By the way, I could be wrong in my thinking about this, but it was central to 
the design to use interface{} channels and meaningful type names. The 
variables and inputs for something like a printf-style call are only bundled 
into a struct literal; no formatting function is called on the calling thread, 
the execution just forks and the log message passes through the channel 
directly.

I even have a type for closures to pass straight through, and they also don't 
execute until the last step of composing the output. So I have 6 core 
goroutines plus 6 for each subsystem I set up, and most of the work is done at 
the end of two passes through channels (subsystems drop messages if so 
configured).
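
Roughly, the shape is something like this (a simplified sketch with made-up 
names, not the actual cl API):

package deferlog

import "fmt"

// Fmt carries a printf-style call without evaluating it: the hot thread
// only builds this struct literal and pushes it onto the channel.
type Fmt struct {
    Format string
    Args   []interface{}
}

// Closure defers arbitrary work; it is only invoked by the writer.
type Closure func() string

// Chan is buffered so the sending thread rarely blocks.
var Chan = make(chan interface{}, 128)

// writer is the only goroutine that actually formats and prints.
func writer() {
    for e := range Chan {
        switch v := e.(type) {
        case Fmt:
            fmt.Printf(v.Format, v.Args...)
        case Closure:
            fmt.Print(v())
        default:
            fmt.Println(v)
        }
    }
}

func init() { go writer() }

So on the hot path a log line is just Chan <- Fmt{"height %d\n", 
[]interface{}{h}}, and the Sprintf-style work happens on the writer goroutine.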

It also has a somewhat crude shutdown handling system in it.

It could definitely do with more work, and the actual overhead in throughput 
and latency should be profiled and optimised, but being simple I don't think 
there's much room to improve in the general design.

The messages pass through two 'funnels': one per subsystem, gated by its level 
setting, and then the actual calls that splice the strings and output the 
result are deferred until the final, 6-channel main thread. I have mainly used 
them on a per-package basis, but you can declare several in one package; they 
just have to have different string labels.

Anyway, I'm kinda proud of it, because my specific need is to avoid as much 
in-thread overhead as possible: during replay of the database log only one 
thread runs, due to the dependencies between data elements. I think this 
design does that well, though I can't say I have any measurements either way. 
I do know that permanently allocated channels only have a startup cost, their 
scheduling is very cheap, and they make it explicit to the runtime that the 
processes are concurrent and ideally should not hold each other up.

The print functions in fmt and log all pay a per-call overhead on the calling 
goroutine, whereas channels have a one-off initialisation cost and a lower 
context-switch cost. Yes, there is more memory use, but mainly because my 
logger funnels messages from a couple of hundred other threads. So that's my 
theory, and why this post caught my eye. My logger is unlicenced, so do 
whatever you like with it, except pretend it's not prior art.
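
If anyone wants to check that theory, a rough benchmark shape would be 
something like this (hypothetical code, not measurements I have made):

package logbench

import (
    "fmt"
    "io/ioutil"
    "testing"
)

// Formatting and the (discarded) write happen on the calling goroutine.
func BenchmarkDirectPrint(b *testing.B) {
    for i := 0; i < b.N; i++ {
        fmt.Fprintf(ioutil.Discard, "block %d tx %d\n", i, i*2)
    }
}

// The caller only sends the operands; a consumer goroutine does the
// formatting and writing off the hot path.
func BenchmarkDeferredPrint(b *testing.B) {
    type entry struct {
        format string
        args   []interface{}
    }
    ch := make(chan entry, 1024)
    done := make(chan struct{})
    go func() {
        for e := range ch {
            fmt.Fprintf(ioutil.Discard, e.format, e.args...)
        }
        close(done)
    }()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        ch <- entry{"block %d tx %d\n", []interface{}{i, i * 2}}
    }
    close(ch)
    <-done
}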

On Saturday, 16 March 2019 17:02:06 UTC+1, Thomas S wrote:
>
> Hello,
>
> I have a piece of software that needs a lot of logging while processing.
>
> I need to:
> 1- Improve processing time by making the logging non-blocking
> 2- Be safe with parallel logging from goroutines
>
> The fmt package doesn't satisfy (1) & (2)
> The log package doesn't satisfy (1)
>
> *Here is my proposal. What do you think ?*
>
> *Design :*
>
> [Log functions] --channel--> [Buffering function (goroutine)] --channel--> [Printing function (goroutine)]
>
>
> package glog
>
> import (
>     "container/list"
>     "fmt"
> )
>
> /*
> ** Public
> */
>
> func Println(data ...interface{}) {
>     bufferChan <- fmt.Sprintln(data...)
> }
>
> func Print(data ...interface{}) {
>     bufferChan <- fmt.Sprint(data...)
> }
>
> func Printf(s string, data ...interface{}) {
>     go func() {
>         r := fmt.Sprintf(s, data...)
>         bufferChan <- r
>     }()
> }
>
> /*
> ** Private
> */
>
> var bufferChan chan string
> var outChan chan string
>
> func init() {
>     bufferChan = make(chan string)
>     outChan = make(chan string)
>     go centrale()
>     go buffout()
> }
>
> func centrale() {
>     var buff *list.List
>     buff = list.New()
>     for {
>         if buff.Len() > 0 {
>             select {
>             case outChan <- buff.Front().Value.(string):
>                 buff.Remove(buff.Front())
>             case tmp := <-bufferChan:
>                 buff.PushBack(tmp)
>             }
>         } else {
>             tmp := <-bufferChan
>             buff.PushBack(tmp)
>         }
>     }
> }
>
> func buffout() {
>     for {
>         data := <-outChan
>         fmt.Print(data)
>     }
> }
>
>
>
> It works well for now. I want to be sure I'm not missing anything, as it's a 
> very important part of the code.
>
> Thank you for your review.
>
>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Christophe Meessen
Please replace 'case ticker.C' with 'case <-ticker.C' in my previous code. 
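
With that change applied, a complete runnable version of the sketch could look 
like this (the channel size, buffer size and tick period are still just 
examples):

package main

import (
    "fmt"
    "time"
)

var buffChan = make(chan string, 128)

func printer() {
    buf := make([]byte, 0, 1024)
    ticker := time.NewTicker(1 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            if len(buf) > 0 {
                fmt.Print(string(buf)) // one print per tick for the whole batch
                buf = buf[:0]          // reset length, keep the allocation
            }
        case m := <-buffChan:
            buf = append(buf, m...)
            buf = append(buf, '\n')
        }
    }
}

func main() {
    go printer()
    for i := 0; i < 5; i++ {
        buffChan <- fmt.Sprintf("message %d", i)
    }
    time.Sleep(2 * time.Second) // give the ticker a chance to flush
}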



Re: [go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-16 Thread caijianyu
The solution you proposed is better, but if the rate of log production is 
still too high, I think the Printf function should not start a new goroutine 
(the go func() { r := ...; bufferChan <- r }() pattern), because the internal 
bytes.Buffer{} may sit at its threshold size all the time. It cannot absorb 
that much logging, and the number of goroutines could grow very quickly.
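
For example, Printf could push onto a bounded channel instead of starting a 
goroutine per call (just a sketch; the package name and channel size are 
arbitrary):

package glogbounded

import "fmt"

// A bounded queue instead of one goroutine per Printf call: when the
// consumer falls behind, callers block here (backpressure) instead of
// piling up goroutines.
var bufferChan = make(chan string, 1024)

func Printf(s string, data ...interface{}) {
    bufferChan <- fmt.Sprintf(s, data...)
}

// PrintfDrop never blocks the caller: if the queue is full,
// the message is simply dropped.
func PrintfDrop(s string, data ...interface{}) {
    select {
    case bufferChan <- fmt.Sprintf(s, data...):
    default:
    }
}

The Sprintf then runs on the caller, the same as Println and Print already do 
in the proposal, but the number of goroutines stays fixed.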

On Sunday, March 17, 2019 at 1:29:23 PM UTC+8, Burak Serdar wrote:
>
> On Sat, Mar 16, 2019 at 11:10 PM > 
> wrote: 
> > 
> > I'm not sure what happens if logs are produced faster than the func 
> > 'centrale' consumes them over the same period of time: will 'buff' grow 
> > larger and larger and use too much memory?
> > For example, the logs come from the network and are written to disk, but 
> > the network is faster than the disk writes. So is there any way to control 
> > the length of 'buff'?
>
> There might be a simpler solution, but it needs to be tuned and tested 
> to see if it behaves better: you can use a buffered string channel to 
> store log messages. Your writer goroutine would read from this 
> channel, and store the log messages in an internal bytes.Buffer{} 
> until it reaches a certain size, and then writes it out. This should 
> limit the memory usage based on the channel buffer size and buffer 
> threshold. You should also have a time.Ticker to write the buffer 
> periodically if log generation slows down. 
>
>
> > 
> > On Sunday, March 17, 2019 at 12:02:06 AM UTC+8, Thomas S wrote: 
> >> 
> >> Hello, 
> >> 
> >> I have a piece of software that needs a lot of logging while processing. 
> >> 
> >> I need to: 
> >> 1- Improve processing time by making the logging non-blocking 
> >> 2- Be safe with parallel logging from goroutines 
> >> 
> >> The fmt package doesn't satisfy (1) & (2) 
> >> The log package doesn't satisfy (1) 
> >> 
> >> Here is my proposal. What do you think ? 
> >> 
> >> Design : 
> >> 
> >> [Log functions] --channel--> [Buffering function (goroutine)] --channel--> [Printing function (goroutine)] 
> >> 
> >> 
> >> package glog 
> >>
> >> import ( 
> >>     "container/list" 
> >>     "fmt" 
> >> ) 
> >>
> >> /* 
> >> ** Public 
> >> */ 
> >>
> >> func Println(data ...interface{}) { 
> >>     bufferChan <- fmt.Sprintln(data...) 
> >> } 
> >>
> >> func Print(data ...interface{}) { 
> >>     bufferChan <- fmt.Sprint(data...) 
> >> } 
> >>
> >> func Printf(s string, data ...interface{}) { 
> >>     go func() { 
> >>         r := fmt.Sprintf(s, data...) 
> >>         bufferChan <- r 
> >>     }() 
> >> } 
> >>
> >> /* 
> >> ** Private 
> >> */ 
> >>
> >> var bufferChan chan string 
> >> var outChan chan string 
> >>
> >> func init() { 
> >>     bufferChan = make(chan string) 
> >>     outChan = make(chan string) 
> >>     go centrale() 
> >>     go buffout() 
> >> } 
> >>
> >> func centrale() { 
> >>     var buff *list.List 
> >>     buff = list.New() 
> >>     for { 
> >>         if buff.Len() > 0 { 
> >>             select { 
> >>             case outChan <- buff.Front().Value.(string): 
> >>                 buff.Remove(buff.Front()) 
> >>             case tmp := <-bufferChan: 
> >>                 buff.PushBack(tmp) 
> >>             } 
> >>         } else { 
> >>             tmp := <-bufferChan 
> >>             buff.PushBack(tmp) 
> >>         } 
> >>     } 
> >> } 
> >>
> >> func buffout() { 
> >>     for { 
> >>         data := <-outChan 
> >>         fmt.Print(data) 
> >>     } 
> >> } 
> >> 
> >> 
> >> 
> >> It works well for now. I want to be sure I'm not missing anything, as it's 
> >> a very important part of the code. 
> >> 
> >> Thank you for your review. 
> >> 



Re: [go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-16 Thread Burak Serdar
On Sat, Mar 16, 2019 at 11:10 PM  wrote:
>
> I'm not sure what happens if logs are produced faster than the func 'centrale'
> consumes them over the same period of time: will 'buff' grow larger and larger
> and use too much memory?
> For example, the logs come from the network and are written to disk, but the
> network is faster than the disk writes. So is there any way to control the
> length of 'buff'?

There might be a simpler solution, but it needs to be tuned and tested
to see if it behaves better: you can use a buffered string channel to
store log messages. Your writer goroutine would read from this
channel, and store the log messages in an internal bytes.Buffer{}
until it reaches a certain size, and then writes it out. This should
limit the memory usage based on the channel buffer size and buffer
threshold. You should also have a time.Ticker to write the buffer
periodically if log generation slows down.
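
Something along these lines, as an untested sketch (the channel size, flush
threshold and tick period are arbitrary):

package main

import (
    "bytes"
    "fmt"
    "os"
    "time"
)

// Bounded channel: producers block when the writer falls too far behind.
var logChan = make(chan string, 1024)

func writer() {
    const flushSize = 4096
    var buf bytes.Buffer
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    flush := func() {
        if buf.Len() > 0 {
            os.Stdout.Write(buf.Bytes())
            buf.Reset()
        }
    }
    for {
        select {
        case m := <-logChan:
            buf.WriteString(m)
            if buf.Len() >= flushSize {
                flush() // size-based flush while logs are coming in fast
            }
        case <-ticker.C:
            flush() // periodic flush when log generation slows down
        }
    }
}

func main() {
    go writer()
    for i := 0; i < 10; i++ {
        logChan <- fmt.Sprintf("event %d\n", i)
    }
    time.Sleep(time.Second) // let the ticker flush before exiting
}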


>
> On Sunday, March 17, 2019 at 12:02:06 AM UTC+8, Thomas S wrote:
>>
>> Hello,
>>
>> I have a piece of software that needs a lot of logging while processing.
>>
>> I need to:
>> 1- Improve processing time by making the logging non-blocking
>> 2- Be safe with parallel logging from goroutines
>>
>> The fmt package doesn't satisfy (1) & (2)
>> The log package doesn't satisfy (1)
>>
>> Here is my proposal. What do you think ?
>>
>> Design :
>>
>> [Log functions] --channel--> [Buffering function (goroutine)] --channel--> [Printing function (goroutine)]
>>
>>
>> package glog
>>
>> import (
>>     "container/list"
>>     "fmt"
>> )
>>
>> /*
>> ** Public
>> */
>>
>> func Println(data ...interface{}) {
>>     bufferChan <- fmt.Sprintln(data...)
>> }
>>
>> func Print(data ...interface{}) {
>>     bufferChan <- fmt.Sprint(data...)
>> }
>>
>> func Printf(s string, data ...interface{}) {
>>     go func() {
>>         r := fmt.Sprintf(s, data...)
>>         bufferChan <- r
>>     }()
>> }
>>
>> /*
>> ** Private
>> */
>>
>> var bufferChan chan string
>> var outChan chan string
>>
>> func init() {
>>     bufferChan = make(chan string)
>>     outChan = make(chan string)
>>     go centrale()
>>     go buffout()
>> }
>>
>> func centrale() {
>>     var buff *list.List
>>     buff = list.New()
>>     for {
>>         if buff.Len() > 0 {
>>             select {
>>             case outChan <- buff.Front().Value.(string):
>>                 buff.Remove(buff.Front())
>>             case tmp := <-bufferChan:
>>                 buff.PushBack(tmp)
>>             }
>>         } else {
>>             tmp := <-bufferChan
>>             buff.PushBack(tmp)
>>         }
>>     }
>> }
>>
>> func buffout() {
>>     for {
>>         data := <-outChan
>>         fmt.Print(data)
>>     }
>> }
>>
>>
>>
>> It works well for now. I want to be sure I'm not missing anything, as it's a 
>> very important part of the code.
>>
>> Thank you for your review.
>>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-16 Thread caijianyu
I'm not sure what happens if logs are produced faster than the func 'centrale' 
consumes them over the same period of time: will 'buff' grow larger and larger 
and use too much memory?
For example, the logs come from the network and are written to disk, but the 
network is faster than the disk writes. So is there any way to control the 
length of 'buff'?

On Sunday, March 17, 2019 at 12:02:06 AM UTC+8, Thomas S wrote:
>
> Hello,
>
> I have a piece of software that needs a lot of logging while processing.
>
> I need to:
> 1- Improve processing time by making the logging non-blocking
> 2- Be safe with parallel logging from goroutines
>
> The fmt package doesn't satisfy (1) & (2)
> The log package doesn't satisfy (1)
>
> *Here is my proposal. What do you think ?*
>
> *Design :*
>
> [Log functions] --channel--> [Buffering function (goroutine)] --channel--> [Printing function (goroutine)]
>
>
> package glog
>
> import (
>     "container/list"
>     "fmt"
> )
>
> /*
> ** Public
> */
>
> func Println(data ...interface{}) {
>     bufferChan <- fmt.Sprintln(data...)
> }
>
> func Print(data ...interface{}) {
>     bufferChan <- fmt.Sprint(data...)
> }
>
> func Printf(s string, data ...interface{}) {
>     go func() {
>         r := fmt.Sprintf(s, data...)
>         bufferChan <- r
>     }()
> }
>
> /*
> ** Private
> */
>
> var bufferChan chan string
> var outChan chan string
>
> func init() {
>     bufferChan = make(chan string)
>     outChan = make(chan string)
>     go centrale()
>     go buffout()
> }
>
> func centrale() {
>     var buff *list.List
>     buff = list.New()
>     for {
>         if buff.Len() > 0 {
>             select {
>             case outChan <- buff.Front().Value.(string):
>                 buff.Remove(buff.Front())
>             case tmp := <-bufferChan:
>                 buff.PushBack(tmp)
>             }
>         } else {
>             tmp := <-bufferChan
>             buff.PushBack(tmp)
>         }
>     }
> }
>
> func buffout() {
>     for {
>         data := <-outChan
>         fmt.Print(data)
>     }
> }
>
>
>
> It works well for now. I want to be sure I'm not missing anything, as it's a 
> very important part of the code.
>
> Thank you for your review.
>
>
