Re: [go-nuts] Is there any way to produce etcd watch chan be closed

2022-03-27 Thread 袁成若
Ok, thanks.

Ian Lance Taylor wrote on Sat, Mar 26, 2022 at 02:45:

> On Fri, Mar 25, 2022 at 11:41 AM 袁成若  wrote:
> >
> > I ran into a problem with an etcd watch channel: it seems the channel gets
> > closed, but I cannot reproduce it.
> >
> > like this:
> >
> > ```
> > for {
> >     ach := etcdClientV3.Watch(context.Background(), "/test", clientv3.WithPrefix())
> >     for {
> >         select {
> >         case wch := <-ach:
> >             fmt.Println("recv chan", wch)
> >         }
> >     }
> > }
> > ```
> > The program prints "recv chan" all the time, but I cannot reproduce the
> > channel being closed. Is there any way to reproduce it?
>
> It sounds like you are describing a problem with the
> https://github.com/etcd-io/etcd/ package.  I suggest that you ask
> there.
>
> Ian
>
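
For what it's worth, a minimal sketch of detecting and handling the closed-channel case, assuming the etcd clientv3 API (the import path varies between etcd versions); this is illustrative only, not code from the thread:

```
package main

import (
    "context"
    "fmt"
    "log"

    clientv3 "go.etcd.io/etcd/client/v3" // import path may differ per etcd version
)

func watchForever(cli *clientv3.Client) {
    for {
        wch := cli.Watch(context.Background(), "/test", clientv3.WithPrefix())
        // Ranging over the channel ends when the watch channel is closed.
        for resp := range wch {
            if err := resp.Err(); err != nil {
                log.Printf("watch error: %v", err)
            }
            fmt.Println("recv chan:", len(resp.Events), "events")
        }
        log.Println("watch channel closed, re-creating the watch")
    }
}

func main() {
    cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()
    watchForever(cli)
}
```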



Re: [go-nuts] Looked at using Go ... nil/SEGV really bothered me .. Go2 Proposal?

2022-03-27 Thread Sam Hughes
@Michael Toy, I disagree! You're entirely correct to like what TypeScript 
does! It's a really cool example of a "modern language" strutting its 
stuff -- it's called "narrowing": 
https://microsoft.github.io/TypeScript-New-Handbook/chapters/narrowing/ For 
a transpiler targeting Javascript, I think it's entirely appropriate. During 
AST construction, each logical constriction of the possible space of types 
that could describe a value is propagated onwards. If you accept a 
parameter, "pet", as iCat or iDog or iPlant, and you want to reference fields 
specific to animals, you must first do something that culls the search space 
of any value that is not a mammal. If you want to try spray(hose(pet)), you 
need to either check that the pet isn't a cat, or else handle the possible 
error result.
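
A rough Go analog of that narrowing step, purely illustrative, is a type switch that narrows an interface value to a concrete type before type-specific fields are used:

```
package main

import "fmt"

type Cat struct{ Name string }
type Dog struct{ Name string }

// describe must narrow pet to a concrete type before using
// type-specific fields -- roughly what TypeScript's narrowing enforces.
func describe(pet any) {
    switch p := pet.(type) {
    case Cat:
        fmt.Println("a cat named", p.Name)
    case Dog:
        fmt.Println("a dog named", p.Name)
    default:
        fmt.Println("not a mammal; no name to print")
    }
}

func main() {
    describe(Cat{Name: "Mia"})
    describe(42)
}
```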

The github.com/jackc/pgtypes repo has a really ergonomic approach to curing 
the nil-ness: package the value with a boolean, and receive a guaranteed 
value when you unwrap it. If you care about whether it had a value, check 
that bool.

Otherwise, resolve that maybe-value to a value ASAP, and go from there.
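
A tiny sketch of that value-plus-boolean shape (just the general pattern, not the API of the package mentioned above); the standard library's sql.NullInt64 has the same layout:

```
package main

import "fmt"

// NullableInt packages a value with a boolean: unwrapping always yields a
// usable value, and Valid says whether it was actually set.
type NullableInt struct {
    Value int
    Valid bool
}

func (n NullableInt) Get() (int, bool) { return n.Value, n.Valid }

func main() {
    set := NullableInt{Value: 7, Valid: true}
    var unset NullableInt

    if v, ok := set.Get(); ok {
        fmt.Println("have", v)
    }
    if _, ok := unset.Get(); !ok {
        fmt.Println("no value")
    }
}
```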

@Axel, I really did mean what I said. TypeScript and Rust are both very 
effective at making sure you know that the state is valid. There are programs 
that cannot be written within that paradigm, and there are also plenty of 
times when that's inconvenient and takes a while to spell out. Of course, 
that safety comes with tedious requirements of proofs, and different needs 
yield different tradeoffs, so you typically get an "unsafe" escape hatch. 
TypeScript is no different, except that the "unsafe" package is called 
"Javascript." The problem in Go, if you read through the proposals 
@Lance Taylor Armstrong shared, isn't that it wouldn't help, rather that Go 
is already in a good spot safety-speed-wise, and you can nudge it over if you 
want it somewhere else; just don't expect to make everyone else nudge it over 
simultaneously.

@Brian Candler, if Go allowed operator overloading or custom allocators, 
that'd be fine. Go doesn't support either, and if you opened a proposal for 
one of them, I'd bet you $5 it gets closed immediately. The more convenient 
approach is to implement a type like the one below. You disagree? So help 
me, I'll disagree with you.

```Go
// Note: Go does not allow methods on a defined pointer type such as
// `type Box[T any] *T`, so a small struct wrapper is used instead.
type Box[T any] struct {
    ptr *T
}

func (b Box[T]) Raw() *T {
    return b.ptr
}

func (b Box[T]) IsNil() bool {
    return b.Raw() == nil
}

func (b Box[T]) Value() (checked T) {
    if !b.IsNil() {
        checked = *b.Raw()
    }
    return checked // zero value of T when the box holds nil
}
```
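
A quick usage sketch for the Box type above; Wrap is a hypothetical constructor added here for illustration, and the surrounding package main / import "fmt" plus the Box definition are assumed:

```Go
// Assumes the Box[T] definition above, package main, and import "fmt".
func Wrap[T any](p *T) Box[T] { return Box[T]{ptr: p} }

func main() {
    n := 42
    full := Wrap(&n)
    var empty Box[int] // zero value: holds a nil pointer

    fmt.Println(full.IsNil(), full.Value())   // false 42
    fmt.Println(empty.IsNil(), empty.Value()) // true 0
}
```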

I recently saw a talk called "It's all about Tradeoffs". This is an 
excellent example of that. Maybe the above could be improved by static 
checking and optimization, but it's never as cheap as just trusting there's 
something there, so long as there actually is something there.
On Friday, March 25, 2022 at 1:41:07 PM UTC-5 Michael Toy wrote:

> The discussion is quite informative for me to read, thanks for responding. 
> Go uses nil in a way which I don't quite yet grok, and so I had no idea if 
> it was even a reasonable thing to wish for. Today I am writing in 
> Typescript, and the way null is integrated into the type system now (after 
> a while) feels natural and helpful to me.
>
> Sam is correct, there is a bug in my Go snippet in the post. For humor value 
> only, I would like to point out that the imaginary Go compiler I was 
> wishing for would have found that bug!
>
> I think Brian gets to the heart of my question, which is "If I really 
> understood Go, would I want something like this". I am hearing, "No, you 
> would not"
>
> I think if I were to have a long conversation with Axel about "what is it 
> that makes programs robust and maintainable" we'd go round in circles a 
> bit, as should happen any time you talk about something complex and 
> important. I think I disagree with some statements, but even the 
> disagreement is super helpful.
>
> Thanks for the discussion!
>
> -Michael Toy
>
> On Thursday, March 24, 2022 at 12:22:44 AM UTC-10 Brian Candler wrote:
>
>> The OP hasn't said specifically which language or feature they're 
>> comparing with, but I wonder if they're asking for a pointer type which is 
>> never allowed to be nil, enforced at compile time.  If so, a normal 
>> pointer-which-may-be-nil would have to be represented as a Maybe[*T] or 
>> union { *T | nil }. To use such a pointer value at runtime you'd have to 
>> deconstruct it via a case statement or similar, with separate branches for 
>> where the value is nil or not-nil. I am sure there have been proposals 
>> along those lines floated here before.
>>
>> I don't think this would negatively affect code readability, because a 
>> function which takes *T as an argument can be sure that the value passed in 
>> can never be nil (the compiler would not allow a value of type Maybe[*T] to 
>> be passed).  Conversely, a function which accepts Maybe[*T] as an argument 
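
The quoted message is cut off by the digest, but the Maybe[*T]-style deconstruction it describes can be sketched in today's Go (illustrative only, not an actual proposal):

```Go
package main

import "fmt"

// Maybe wraps a possibly-nil pointer; Match stands in for the
// "deconstruct via a case statement" step described above.
type Maybe[T any] struct{ val *T }

func Some[T any](v *T) Maybe[T] { return Maybe[T]{val: v} }
func None[T any]() Maybe[T]     { return Maybe[T]{} }

func (m Maybe[T]) Match(some func(*T), none func()) {
    if m.val != nil {
        some(m.val)
    } else {
        none()
    }
}

func main() {
    x := 10
    Some(&x).Match(
        func(p *int) { fmt.Println("value:", *p) },
        func() { fmt.Println("nothing") },
    )
    None[int]().Match(
        func(p *int) { fmt.Println("value:", *p) },
        func() { fmt.Println("nothing") },
    )
}
```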

[go-nuts] HTTP/2 client creating multiple TCP Connections

2022-03-27 Thread envee
I have a telecom client application which connects to an HTTP/2 server (5G 
telecom application server, to be exact). 
At startup, I create an HTTP/2 client using the net/http2 Transport.
It starts multiple goroutines, each of which shares the same HTTP/2 
client connection to send HTTP POST requests to the server.

I was of the understanding that if an HTTP/2 client is reused across 
multiple goroutines, it will not end up creating multiple TCP connections.

What I observed is that this is not true: (nearly) each goroutine's request 
triggers the creation of a TCP connection. This causes my application to run 
out of file descriptors. I could possibly get around this by setting the 
ulimit to unlimited.

I then set the StrictMaxConcurrentStreams flag in the http2.Transport to 
true, and this restricted the client application to a single TCP connection.

But the issue I face is that when I try to send an extremely high number of 
concurrent requests (more than about 3000-4000 per second), I see an empty 
JSON request body being sent out.

So I guess I have 2 issues:

1) Why is my HTTP/2 client creating multiple TCP connections when 
http2.Transport.StrictMaxConcurrentStreams is false? I am guessing this is 
because of the large number of concurrent requests being made, but I still 
expect the http2 transport to manage that transparently.

2) When I do manage to create just a single TCP connection (by setting 
StrictMaxConcurrentStreams=true) over which requests/responses are 
multiplexed, I see a NULL payload being sent in my HTTP/2 requests.

Regards,
Neeraj
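
For reference, a minimal sketch of where that flag lives, assuming the golang.org/x/net/http2 package (illustrative only, not the poster's code):

```
package main

import (
    "crypto/tls"
    "net/http"

    "golang.org/x/net/http2"
)

func newClient() *http.Client {
    t := &http2.Transport{
        // Queue requests onto the streams of existing connections when the
        // server's MAX_CONCURRENT_STREAMS limit is hit, instead of dialing
        // additional TCP connections.
        StrictMaxConcurrentStreams: true,
        TLSClientConfig:            &tls.Config{}, // configure certificates/ALPN as needed
    }
    return &http.Client{Transport: t}
}

func main() {
    _ = newClient() // use this client for the POST requests described above
}
```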


 



Re: [go-nuts] Investigating suspected memory leak based on StackInUse or HeapSys stats

2022-03-27 Thread robert engels
Because they are tracked separately. See allocSpan() in mheap.go.

> On Mar 27, 2022, at 11:50 AM, Shlomi Amit  wrote:
> 
> Thanks, Robert.
> My C code does call back into Go code, but there is no recursion there.
> If stack allocations are part of the heap, why are they not reflected in 
> HeapAlloc and pprof?
> 

Re: [go-nuts] Investigating suspected memory leak based on StackInUse or HeapSys stats

2022-03-27 Thread Shlomi Amit
Thanks, Robert.
My C code does call back into Go code, but there is no recursion there.
If stack allocations are part of the heap, why are they not reflected in
HeapAlloc and pprof?

On Sun, Mar 27, 2022, 18:52 robert engels  wrote:

> Neither of those track C allocations - unless the C is calling back into
> Go.
>
> You can review the low level usage in stack.go
>
> Note that Go stack allocations are for the most part done in the heap, and
> are dynamically managed across Go routines.
>
> Are you possibly doing some very deep recursion at times? Because the
> stacks will not shrink under certain conditions.
>
> Since they are allocated on the heap, you may be encountering a situation
> where you are extending the stack to a degree that the GC cannot keep up.
>
> On Mar 26, 2022, at 4:01 PM, Shlomi Amit  wrote:
>
> I've already checked the process threads using the ps command, and threads
> are not leaking.
> Still, it is more than possible that there are memory leaks within my Cgo
> code, or in gstreamer (which I'm also running via Cgo).
>
> But before I start chasing that path, I would like to be sure whether the
> HeapSys or StackInuse stats also include Cgo memory allocations, because my
> understanding was that the Go runtime does not reflect memory allocations
> done from C code.
>
> I've already found a Go memory leak before with pprof (due to a leak seen in
> the HeapObjects count) and another goroutine leak, and fixed them... But now
> I'm left with the mystery of pprof showing low memory usage while StackInuse
> & HeapSys keep increasing and are not reflected in pprof.
>
> So - do HeapSys or StackInuse include memory allocations done in C code?
> If so, I'll move on and start chasing the C code (possibly using Valgrind?
> Can I use it to profile my Go application with Cgo?)
>
> On Sat, Mar 26, 2022, 22:54 robert engels  wrote:
>
>> Are you certain your CGo code isn’t creating threads?
>>
>> On Mar 26, 2022, at 1:10 PM, Shlomi Amit  wrote:
>>
>> Yes. I'm already monitoring the runtime stat for the number of goroutines
>> (can be seen in the screenshot as well) and it's not increasing.
>>
>> On Sat, Mar 26, 2022, 20:40 robert engels  wrote:
>>
>>> Are you certain the number of Go routines is not increasing - each Go
>>> routine requires stack. See
>>> https://tpaschalis.github.io/goroutines-size/
>>>
>>> On Mar 24, 2022, at 3:18 PM, Shlomi Amit  wrote:
>>>
>>> Hi.
>>>
>>> I’m trying to find a memory leak in my application.
>>> I've added some runtime memory stats to the logs (heap & stack stats,
>>> goroutine and HeapObjects counts).
>>> I can see that from time to time m.StackInuse is increasing without any
>>> obvious reason from the application perspective (at least not one that I
>>> can find yet).
>>> Here are some additional runtime mem stats (taken at the same time):
>>> StackInuse: 56MB
>>> HeapSys: 147MB
>>> HeapAlloc: 97MB
>>>
>>> While StackInuse and HeapSys seem to increase from time to time,
>>> HeapAlloc stays about the same during the 6 hours the application is
>>> running.
>>> There seems to be a bit of correlation between StackInuse and HeapSys,
>>> but over time StackInuse increases more than HeapSys. (Does HeapSys
>>> include StackInuse?)
>>>
>>> In addition to adding runtime memory logs, I’m also creating periodic
>>> heap profile dumps.
>>> My problem is that when analyzing the heap with pprof, it gives me no
>>> clue why StackInUse is so high.
>>> The pprof inuse_space shows:
>>> “Showing nodes accounting for 55.31MB, 93.25% of 59.31MB total”
>>> What does this total MB represent? It doesn't seem to match HeapAlloc,
>>> HeapSys or StackInuse.
>>> Does pprof heap profile even include StackInUse?
>>>
>>> I really need to understand where the leak is coming from, but after
>>> looking in many places, the memory stats are still not clear to me, and
>>> neither is which memory stat the pprof heap profile really represents.
>>> Note that I'm also logging the HeapObjects count and I don't see any leak
>>> there… It's just StackInuse increasing from time to time (it seems it
>>> always doubles itself... ~8MB->16MB->32MB), and HeapSys.
>>>
>>> Note that I'm also using Cgo, but my understanding is that Cgo memory
>>> allocations will not be reflected in the runtime memory stats. Is this
>>> correct, and can I assume that if the runtime memory stats are increasing
>>> it is definitely because of Go code and not C code?
>>> I hope I was clear, but I've added a screenshot of the different memory
>>> stats.
>>> The points in time where StackInuse increases are marked in red. (My
>>> current understanding, which might be wrong, is that StackInuse is not
>>> included in HeapSys, which is why they are stacked in the graph.)
>>>
>>> I know my write-up is a bit messed up and you might not really be sure
>>> what's being asked, so I'll try to summarize:
>>>
>>>    1. What does StackInuse represent? Is it part of HeapSys?
>>>    2. What does HeapSys represent? (Both it and StackInuse are way higher
>>>    than HeapAlloc.)
>>>    3. Why does pprof inuse_space not seem to have any notion 

Re: [go-nuts] Investigating suspected memory leak based on StackInUse or HeapSys stats

2022-03-27 Thread robert engels
Neither of those track C allocations - unless the C is calling back into Go.

You can review the low level usage in stack.go

Note that Go stack allocations are for the most part done in the heap, and are 
dynamically managed across Go routines.

Are you possibly doing some very deep recursion at times? Because the stacks 
will not shrink under certain conditions.

Since they are allocated on the heap, you may be encountering a situation where 
you are extending the stack to a degree that the GC cannot keep up.
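
As an aside, a small sketch of logging the runtime stats discussed in this thread (illustrative only, not code from the thread); note that these counters cover Go-managed memory, so C allocations made via cgo will not show up in them:

```
package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    var m runtime.MemStats
    for range time.Tick(10 * time.Second) {
        runtime.ReadMemStats(&m)
        // Go-managed memory only; cgo/C allocations are not tracked here.
        fmt.Printf("HeapAlloc=%dMB HeapSys=%dMB StackInuse=%dMB HeapObjects=%d Goroutines=%d\n",
            m.HeapAlloc>>20, m.HeapSys>>20, m.StackInuse>>20,
            m.HeapObjects, runtime.NumGoroutine())
    }
}
```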


[go-nuts] Java to Go converter - 2

2022-03-27 Thread alex-coder
After several months of switching from Java to Golang, it seemed to me that
it would be interesting to translate Java code into Golang automatically.
The text below shows what has been done so far.

The work is not a prototype, but rather an indication that the result is 
achievable. Therefore, I deliberately simplified the development context of 
the Converter where possible. 

At first it seemed important to me that Java and Go differ in how dynamic 
dispatch is implemented; more precisely, there is no Java-style dynamic 
dispatch in Go. The solution applied in the current implementation not only 
looks ugly but even violates several very important rules of OO design, 
I'm not kidding here. But this option looks like it will work. 

Below I provide the 4 code samples in Java, each followed by the 
automatically generated Golang code and comments as needed. 

*1. Of course, I started with the most popular program: "Hello World".*

package main;

public class HelloWorld {
  public static void main(  String[] args){
System.out.println("Hello World");
  }
}

*Converter gave out:*

package main

import (
"fmt"
"os"
)

type HelloWorld struct{}

func main() {

var args []string = os.Args

var hw HelloWorld = HelloWorld{}
hw.HelloWorld_main(args)
}

/** generated method **/
func (helloWorld *HelloWorld) HelloWorld_main(args []string) {
fmt.Println("Hello World")
}

*2. Next, it was interesting to deal with the problem of simple 
inheritance.*

package main;

public class TestInheritance {
  public static void main(  String[] args){
Inheritance inh=null;
inh=new Second();
inh.hello();
inh=new Third();
inh.hello();
  }
}
public interface Inheritance {
  public void hello();
}
class Second implements Inheritance {
  public void hello(){
System.out.println("Second");
  }
}
class Third implements Inheritance {
  public void hello(){
System.out.println("Third");
  }
}
 
*Converter gave out:*

package main

import (
"fmt"
"os"
)

type TestInheritance struct{}

func main() {

var args []string = os.Args

var ti TestInheritance = TestInheritance{}
ti.TestInheritance_main(args)
}

/** generated method **/
func (testInheritance *TestInheritance) TestInheritance_main(args []string) 
{

var inh Inheritance
inh = AddressSecond(Second{})
inh.hello()
inh = AddressThird(Third{})
inh.hello()
}

type Inheritance interface {
hello()
}
type Second struct{}

func (second *Second) hello() {
fmt.Println("Second")
}

type Third struct{}

func (third *Third) hello() {
fmt.Println("Third")
}

func AddressSecond(s Second) *Second { return &s }
func AddressThird(t Third) *Third    { return &t }
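
For comparison (hand-written, not produced by the Converter), idiomatic Go takes the address of a composite literal directly, so the Address* helper functions are unnecessary:

```
package main

import "fmt"

type Inheritance interface {
    hello()
}

type Second struct{}

func (s *Second) hello() { fmt.Println("Second") }

type Third struct{}

func (t *Third) hello() { fmt.Println("Third") }

func main() {
    var inh Inheritance
    inh = &Second{} // &T{} is addressable, so no helper is needed
    inh.hello()
    inh = &Third{}
    inh.hello()
}
```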


*3. In the following example, it is necessary to correctly define a 
common interface for the inheritance tree.*

package no.packeges;

public class TestExtension {
  public static void main(  String[] args){
TestExtension te=new TestExtension();
te.hello();
te=new Second();
te.hello();
te=new Third();
te.hello();
te=new Fourth();
te.hello();
  }
  public void hello(){
System.out.println("hello");
  }
}
class Second extends TestExtension {
  public void hello(){
System.out.println("Second");
  }
}
class Third extends TestExtension {
  public void hello(){
System.out.println("Third");
  }
}
class Fourth extends Third {
  public void hello(){
System.out.println("Fourth");
  }
}

*Converter gave out:*

package main

import (
"fmt"
"os"
)

type TestExtension struct{}

func main() {

var args []string = os.Args

var te TestExtension = TestExtension{}
te.TestExtension_main(args)
}
func (testExtension *TestExtension) hello() {
fmt.Println("hello")
}

/** generated method **/
func (testExtension *TestExtension) TestExtension_main(args []string) {

var te ITestExtension = AddressTestExtension(TestExtension{})
te.hello()
te = AddressSecond(Second{})
te.hello()
te = AddressThird(Third{})
te.hello()
te = AddressFourth(Fourth{})
te.hello()
}

type Second struct {
TestExtension
}

func (second *Second) hello() {
fmt.Println("Second")
}

type Third struct {
TestExtension
}

func (third *Third) hello() {
fmt.Println("Third")
}

type Fourth struct {
Third
}

func (fourth *Fourth) hello() {
fmt.Println("Fourth")
}

type ITestExtension interface { 
/** Generated Method */
hello()
}

func AddressSecond(s Second) *Second { return &s }
func AddressThird(t Third) *Third { return &t }
func AddressTestExtension(t TestExtension) *TestExtension { return &t }
func AddressFourth(f Fourth) *Fourth { return &f }





*4. Now the Dynamic Dispatching