Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package fortio for openSUSE:Factory checked in at 2022-04-17 23:50:17
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/fortio (Old)
 and      /work/SRC/openSUSE:Factory/.fortio.new.1941 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "fortio"

Sun Apr 17 23:50:17 2022 rev:7 rq:970424 version:1.26.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/fortio/fortio.changes    2022-04-05 19:56:02.141844080 +0200
+++ /work/SRC/openSUSE:Factory/.fortio.new.1941/fortio.changes  2022-04-17 23:51:54.010476434 +0200
@@ -1,0 +2,7 @@
+Sat Apr 16 09:03:30 UTC 2022 - ka...@b1-systems.de
+
+- Update to version 1.26.0:
+  * Fix tcp load with larger than buffer (32k) payload (#549)
+  * no catchup mode (fixed/set maximum qps; skip requests when falling behind) (#544)
+
+-------------------------------------------------------------------

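For readers who want to smoke-test the first change locally after updating, a minimal sketch follows. It is illustrative rather than part of the package: the tcp-echo port (8078) is the default reported by `fortio server` per the README further down, and the qps, duration and 100000-byte payload size are arbitrary values chosen only to exceed the old 32k buffer.

```shell
# Start fortio's bundled echo servers in the background (the tcp-echo listener defaults to port 8078).
fortio server &

# Exercise the tcp runner with a random payload larger than the old 32k buffer (#549).
fortio load -qps 10 -t 10s -payload-size 100000 tcp://localhost:8078/
```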
Old:
----
  fortio-1.25.0.tar.gz

New:
----
  fortio-1.26.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ fortio.spec ++++++
--- /var/tmp/diff_new_pack.dilcoa/_old  2022-04-17 23:51:54.650477310 +0200
+++ /var/tmp/diff_new_pack.dilcoa/_new  2022-04-17 23:51:54.658477321 +0200
@@ -19,7 +19,7 @@
 %define __arch_install_post export NO_BRP_STRIP_DEBUG=true
 
 Name:           fortio
-Version:        1.25.0
+Version:        1.26.0
 Release:        0
 Summary:        Load testing library, command line tool, advanced echo server and web UI
 License:        Apache-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.dilcoa/_old  2022-04-17 23:51:54.682477354 +0200
+++ /var/tmp/diff_new_pack.dilcoa/_new  2022-04-17 23:51:54.686477360 +0200
@@ -3,7 +3,7 @@
     <param name="url">https://github.com/fortio/fortio</param>
     <param name="scm">git</param>
     <param name="exclude">.git</param>
-    <param name="revision">v1.25.0</param>
+    <param name="revision">v1.26.0</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="changesgenerate">enable</param>
     <param name="versionrewrite-pattern">v(.*)</param>
@@ -16,7 +16,7 @@
     <param name="compression">gz</param>
   </service>
   <service name="go_modules" mode="disabled">
-    <param name="archive">fortio-1.25.0.tar.gz</param>
+    <param name="archive">fortio-1.26.0.tar.gz</param>
   </service>
 </services>
 

++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.dilcoa/_old  2022-04-17 23:51:54.702477382 +0200
+++ /var/tmp/diff_new_pack.dilcoa/_new  2022-04-17 23:51:54.706477387 +0200
@@ -1,6 +1,6 @@
 <servicedata>
 <service name="tar_scm">
                 <param name="url">https://github.com/fortio/fortio</param>
-              <param name="changesrevision">3eed83884d1264b2faa10dc3fc2b0517ae2eae8d</param></service></servicedata>
+              <param name="changesrevision">1219538d78b521e348bc2ba6d177049e7993f0a4</param></service></servicedata>
 (No newline at EOF)
 

++++++ fortio-1.25.0.tar.gz -> fortio-1.26.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/README.md new/fortio-1.26.0/README.md
--- old/fortio-1.25.0/README.md 2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/README.md 2022-04-15 22:29:36.000000000 +0200
@@ -46,13 +46,13 @@
 Or download one of the binary distributions, from the [releases](https://github.com/fortio/fortio/releases) assets page or for instance:
 
 ```shell
-curl -L https://github.com/fortio/fortio/releases/download/v1.25.0/fortio-linux_x64-1.25.0.tgz \
+curl -L https://github.com/fortio/fortio/releases/download/v1.26.0/fortio-linux_x64-1.26.0.tgz \
  | sudo tar -C / -xvzpf -
 # or the debian package
-wget https://github.com/fortio/fortio/releases/download/v1.25.0/fortio_1.25.0_amd64.deb
-dpkg -i fortio_1.25.0_amd64.deb
+wget https://github.com/fortio/fortio/releases/download/v1.26.0/fortio_1.26.0_amd64.deb
+dpkg -i fortio_1.26.0_amd64.deb
 # or the rpm
-rpm -i https://github.com/fortio/fortio/releases/download/v1.25.0/fortio-1.25.0-1.x86_64.rpm
+rpm -i https://github.com/fortio/fortio/releases/download/v1.26.0/fortio-1.26.0-1.x86_64.rpm
 ```
 
 On a MacOS you can also install Fortio using [Homebrew](https://brew.sh/):
@@ -61,7 +61,7 @@
 brew install fortio
 ```
 
-On Windows, download https://github.com/fortio/fortio/releases/download/v1.25.0/fortio_win_1.25.0.zip and extract `fortio.exe` to any location, then using the Windows Command Prompt:
+On Windows, download https://github.com/fortio/fortio/releases/download/v1.26.0/fortio_win_1.26.0.zip and extract `fortio.exe` to any location, then using the Windows Command Prompt:
 ```
 fortio.exe server
 ```
@@ -89,11 +89,12 @@
 | Flag         | Description, example |
 | -------------|----------------------|
 | `-qps rate` | Queries Per Seconds or 0 for no wait/max qps |
+| `-nocatchup` | Do not try to reach the target qps by going faster when the service falls behind and then recovers. Makes QPS an absolute ceiling even if the service has some spikes in latency, fortio will not compensate (but also won't stress the target more than the set qps). Recommended to use jointly with `-uniform`. |
 | `-c connections` | Number of parallel simultaneous connections (and matching go routine) |
 | `-t duration` | How long to run the test  (for instance `-t 30m` for 30 minutes) or 0 to run until ^C, example (default 5s) |
 | `-n numcalls` | Run for exactly this number of calls instead of duration. Default (0) is to use duration (-t). |
 | `-payload str` or `-payload-file fname` | Switch to using POST with the given payload (see also `-payload-size` for random payload)|
-| `-uniform` | Spread the calls across threads |
+| `-uniform` | Spread the calls in time across threads for a more uniform call distribution. Works even better in conjunction with `-nocatchup`. |
 | `-r resolution` | Resolution of the histogram lowest buckets in seconds (default 0.001 i.e 1ms), use 1/10th of your expected typical latency |
 | `-H "header: value"` | Can be specified multiple times to add headers (including Host:) |
 | `-a`     |  Automatically save JSON result with filename based on labels and timestamp |
@@ -106,7 +107,7 @@
 <details>
 <!-- use release/updateFlags.sh to update this section -->
 <pre>
-Φορτίο 1.25.0 usage:
+Φορτίο 1.26.0 usage:
 where command is one of: load (load testing), server (starts ui, http-echo,
  redirect, proxies, tcp-echo and grpc ping servers), tcp-echo (only the tcp-echo
  server), report (report only UI server), redirect (only the redirect server),
@@ -237,6 +238,9 @@
 is to use duration (-t). Default is 1 when used as grpc ping count.
   -nc-dont-stop-on-eof
         in netcat (nc) mode, don't abort as soon as remote side closes
+  -nocatchup
+        set to exact fixed qps and prevent fortio from trying to catchup when
+the target fails to keep up temporarily
   -offset duration
         Offset of the histogram data
   -p string
@@ -359,16 +363,17 @@
 
 ```Shell
 $ fortio server &
-14:11:05 I fortio_main.go:171> Not using dynamic flag watching (use -config to set watch directory)
-Fortio X.Y.Z tcp-echo server listening on [::]:8078
-Fortio X.Y.Z grpc 'ping' server listening on [::]:8079
-Fortio X.Y.Z https redirector server listening on [::]:8081
-Fortio X.Y.Z echo server listening on [::]:8080
-Data directory is /Users/ldemailly/go/src/fortio.org/fortio
+Fortio X.Y.Z tcp-echo server listening on tcp [::]:8078
+Fortio X.Y.Z udp-echo server listening on udp [::]:8078
+Fortio X.Y.Z grpc 'ping' server listening on tcp [::]:8079
+Fortio X.Y.Z https redirector server listening on tcp [::]:8081
+Fortio X.Y.Z http-echo server listening on tcp [::]:8080
+Data directory is /Users/ldemailly/dev/fortio
 UI started - visit:
 http://localhost:8080/fortio/
 (or any host/ip reachable on this server)
-14:11:05 I fortio_main.go:233> All fortio X.Y.Z release goM.m.p servers started!
+14:11:05 I fortio_main.go:285> Note: not using dynamic flag watching (use -config to set watch directory)
+14:11:05 I fortio_main.go:293> All fortio X.Y.Z unknown goM.m.p servers started!
 ```
 
 ### Change the port / binding address
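As an illustration of how the two flags documented in the table above combine (this example is editor-added, not part of the upstream README), here is a hedged sketch of a run where the configured qps acts as a hard ceiling; the target URL and the numbers are placeholders, using fortio's own http-echo server on its default port 8080 as the target.

```shell
# -uniform spreads the calls in time across the 4 connections; -nocatchup (new in 1.26.0)
# skips calls that fall behind instead of bursting to catch up, so 100 qps is never exceeded.
fortio load -qps 100 -c 4 -t 30s -uniform -nocatchup http://localhost:8080/
```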
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/fhttp/http_server.go new/fortio-1.26.0/fhttp/http_server.go
--- old/fortio-1.25.0/fhttp/http_server.go      2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/fhttp/http_server.go      2022-04-15 22:29:36.000000000 +0200
@@ -356,7 +356,7 @@
 // input for dynamic http server.
 func Serve(port, debugPath string) (*http.ServeMux, net.Addr) {
        startTime = time.Now()
-       mux, addr := HTTPServer("echo", port)
+       mux, addr := HTTPServer("http-echo", port)
        if addr == nil {
                return nil, nil // error already logged
        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/fnet/network.go new/fortio-1.26.0/fnet/network.go
--- old/fortio-1.25.0/fnet/network.go   2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/fnet/network.go   2022-04-15 22:29:36.000000000 +0200
@@ -113,7 +113,7 @@
        }
        lAddr := listener.Addr()
        if len(name) > 0 {
-               fmt.Printf("Fortio %s %s TCP server listening on %s\n", version.Short(), name, lAddr)
+               fmt.Printf("Fortio %s %s server listening on %s %s\n", version.Short(), name, sockType, lAddr)
        }
        return listener, lAddr
 }
@@ -132,7 +132,7 @@
                return nil, nil
        }
        if len(name) > 0 {
-               fmt.Printf("Fortio %s %s UDP server listening on %s\n", version.Short(), name, udpconn.LocalAddr())
+               fmt.Printf("Fortio %s %s server listening on udp %s\n", version.Short(), name, udpconn.LocalAddr())
        }
        return udpconn, udpconn.LocalAddr()
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/fortio_main.go new/fortio-1.26.0/fortio_main.go
--- old/fortio-1.25.0/fortio_main.go    2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/fortio_main.go    2022-04-15 22:29:36.000000000 +0200
@@ -167,8 +167,10 @@
 
        maxStreamsFlag = flag.Uint("grpc-max-streams", 0,
                "MaxConcurrentStreams for the grpc server. Default (0) is to leave the option unset.")
-       jitterFlag  = flag.Bool("jitter", false, "set to true to de-synchronize parallel clients' by 10%")
-       uniformFlag = flag.Bool("uniform", false, "set to true to de-synchronize parallel clients' requests uniformly")
+       jitterFlag    = flag.Bool("jitter", false, "set to true to de-synchronize parallel clients' by 10%")
+       uniformFlag   = flag.Bool("uniform", false, "set to true to de-synchronize parallel clients' requests uniformly")
+       nocatchupFlag = flag.Bool("nocatchup", false,
+               "set to exact fixed qps and prevent fortio from trying to catchup when the target fails to keep up temporarily")
        // nc mode flag(s).
        ncDontStopOnCloseFlag = flag.Bool("nc-dont-stop-on-eof", false, "in netcat (nc) mode, don't abort as soon as remote side closes")
        // Mirror origin global setting (should be per destination eventually).
@@ -409,6 +411,7 @@
                Uniform:     *uniformFlag,
                RunID:       *bincommon.RunIDFlag,
                Offset:      *offsetFlag,
+               NoCatchUp:   *nocatchupFlag,
        }
        err := ro.AddAccessLogger(*accessLogFileFlag, *accessLogFileFormat)
        if err != nil {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/periodic/periodic.go new/fortio-1.26.0/periodic/periodic.go
--- old/fortio-1.25.0/periodic/periodic.go      2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/periodic/periodic.go      2022-04-15 22:29:36.000000000 +0200
@@ -140,9 +140,12 @@
        Uniform bool
        // Optional run id; used by the server to identify runs.
        RunID int64
-       // Optional Offect Duration; to offset the histogram function duration
-       Offset       time.Duration
+       // Optional Offset Duration; to offset the histogram function duration
+       Offset time.Duration
+       // Optional AccessLogger to log every request made. See AddAccessLogger.
        AccessLogger AccessLogger
+       // No catch-up: if true we will do exactly the requested QPS and not try to catch up if the target is temporarily slow.
+       NoCatchUp bool
 }
 
 // RunnerResults encapsulates the actual QPS observed and duration histogram.
@@ -160,6 +163,7 @@
        Exactly           int64 // Echo back the requested count
        Jitter            bool
        Uniform           bool
+       NoCatchUp         bool
        RunID             int64 // Echo back the optional run id
        AccessLoggerInfo  string
 }
@@ -311,12 +315,7 @@
        return &r.RunnerOptions // sort of returning this here
 }
 
-func (r *periodicRunner) runQPSSetup() (requestedDuration string, requestedQPS string, numCalls int64, leftOver int64) {
-       // AccessLogger info check
-       extra := ""
-       if r.AccessLogger != nil {
-               extra = fmt.Sprintf(" with access logger %s", r.AccessLogger.Info())
-       }
+func (r *periodicRunner) runQPSSetup(extra string) (requestedDuration string, requestedQPS string, numCalls int64, leftOver int64) {
        // r.Duration will be 0 if endless flag has been provided. Otherwise it will have the provided duration time.
        hasDuration := (r.Duration > 0)
        // r.Exactly is > 0 if we use Exactly iterations instead of the duration.
@@ -363,15 +362,15 @@
        return requestedDuration, requestedQPS, numCalls, leftOver
 }
 
-func (r *periodicRunner) runNoQPSSetup() (requestedDuration string, numCalls int64, leftOver int64) {
+func (r *periodicRunner) runMaxQPSSetup(extra string) (requestedDuration string, numCalls int64, leftOver int64) {
        // r.Duration will be 0 if endless flag has been provided. Otherwise it will have the provided duration time.
        hasDuration := (r.Duration > 0)
        // r.Exactly is > 0 if we use Exactly iterations instead of the duration.
        useExactly := (r.Exactly > 0)
        if !useExactly && !hasDuration {
                // Always log something when waiting for ^C
-               _, _ = fmt.Fprintf(r.Out, "Starting at max qps with %d thread(s) [gomax %d] until interrupted\n",
-                       r.NumThreads, runtime.GOMAXPROCS(0))
+               _, _ = fmt.Fprintf(r.Out, "Starting at max qps with %d thread(s) [gomax %d] until interrupted%s\n",
+                       r.NumThreads, runtime.GOMAXPROCS(0), extra)
                return
        }
        // else:
@@ -384,12 +383,12 @@
                numCalls = r.Exactly / int64(r.NumThreads)
                leftOver = r.Exactly % int64(r.NumThreads)
                if log.Log(log.Warning) {
-                       _, _ = fmt.Fprintf(r.Out, "for %s (%d per thread + %d)\n", requestedDuration, numCalls, leftOver)
+                       _, _ = fmt.Fprintf(r.Out, "for %s (%d per thread + %d)%s\n", requestedDuration, numCalls, leftOver, extra)
                }
        } else {
                requestedDuration = fmt.Sprint(r.Duration)
                if log.Log(log.Warning) {
-                       _, _ = fmt.Fprintf(r.Out, "for %s\n", requestedDuration)
+                       _, _ = fmt.Fprintf(r.Out, "for %s%s\n", requestedDuration, extra)
                }
        }
        return
@@ -406,11 +405,16 @@
        var numCalls int64
        var leftOver int64 // left over from r.Exactly / numThreads
        var requestedDuration string
+       // AccessLogger info check
+       extra := ""
+       if r.AccessLogger != nil {
+               extra = fmt.Sprintf(" with access logger %s", r.AccessLogger.Info())
+       }
        requestedQPS := "max"
        if useQPS {
-               requestedDuration, requestedQPS, numCalls, leftOver = r.runQPSSetup()
+               requestedDuration, requestedQPS, numCalls, leftOver = r.runQPSSetup(extra)
        } else {
-               requestedDuration, numCalls, leftOver = r.runNoQPSSetup()
+               requestedDuration, numCalls, leftOver = r.runMaxQPSSetup(extra)
        }
        runnersLen := len(r.Runners)
        if runnersLen == 0 {
@@ -486,7 +490,7 @@
        result := RunnerResults{
                r.RunType, r.Labels, start, requestedQPS, requestedDuration,
                actualQPS, elapsed, r.NumThreads, version.Short(), functionDuration.Export().CalcPercentiles(r.Percentiles),
-               r.Exactly, r.Jitter, r.Uniform, r.RunID, loggerInfo,
+               r.Exactly, r.Jitter, r.Uniform, r.NoCatchUp, r.RunID, loggerInfo,
        }
        if log.Log(log.Warning) {
                result.DurationHistogram.Print(r.Out, "Aggregated Function Time")
@@ -647,36 +651,45 @@
                        r.AccessLogger.Report(id, fStart.UnixNano(), latency)
                }
                funcTimes.Record(latency)
-               i++
                // if using QPS / pre calc expected call # mode:
                if useQPS { // nolint: nestif
-                       if (useExactly || hasDuration) && i >= numCalls {
-                               break // expected exit for that mode
-                       }
-                       elapsed := time.Since(start)
-                       var targetElapsedInSec float64
-                       if hasDuration {
-                               // This next line is tricky - such as for 2s duration and 1qps there is 1
-                               // sleep of 2s between the 2 calls and for 3qps in 1sec 2 sleep of 1/2s etc
-                               targetElapsedInSec = (float64(i) + float64(i)/float64(numCalls-1)) / perThreadQPS
-                       } else {
-                               // Calculate the target elapsed when in endless execution
-                               targetElapsedInSec = float64(i) / perThreadQPS
-                       }
-                       targetElapsedDuration := time.Duration(int64(targetElapsedInSec * 1e9))
-                       sleepDuration := targetElapsedDuration - elapsed
-                       if r.Jitter {
-                               sleepDuration += getJitter(sleepDuration)
-                       }
-                       log.Debugf("%s target next dur %v - sleep %v", tIDStr, targetElapsedDuration, sleepDuration)
-                       sleepTimes.Record(sleepDuration.Seconds())
-                       select {
-                       case <-runnerChan:
-                               break MainLoop
-                       case <-time.After(sleepDuration):
-                               // continue normal execution
+                       for {
+                               i++
+                               if (useExactly || hasDuration) && i >= numCalls {
+                                       break MainLoop // expected exit for that mode
+                               }
+                               var targetElapsedInSec float64
+                               if hasDuration {
+                                       // This next line is tricky - such as for 2s duration and 1qps there is 1
+                                       // sleep of 2s between the 2 calls and for 3qps in 1sec 2 sleep of 1/2s etc
+                                       targetElapsedInSec = (float64(i) + float64(i)/float64(numCalls-1)) / perThreadQPS
+                               } else {
+                                       // Calculate the target elapsed when in endless execution
+                                       targetElapsedInSec = float64(i) / perThreadQPS
+                               }
+                               targetElapsedDuration := time.Duration(int64(targetElapsedInSec * 1e9))
+                               elapsed := time.Since(start)
+                               sleepDuration := targetElapsedDuration - elapsed
+                               if r.NoCatchUp && sleepDuration < 0 {
+                                       // Skip that request as we took too long
+                                       log.LogVf("%s request took too long %.04f s, would sleep %v, skipping iter %d", tIDStr, latency, sleepDuration, i)
+                                       continue
+                               }
+                               if r.Jitter {
+                                       sleepDuration += getJitter(sleepDuration)
+                               }
+                               log.Debugf("%s target next dur %v - sleep %v", tIDStr, targetElapsedDuration, sleepDuration)
+                               sleepTimes.Record(sleepDuration.Seconds())
+                               select {
+                               case <-runnerChan:
+                                       break MainLoop
+                               case <-time.After(sleepDuration):
+                                       // continue normal execution
+                               }
+                               break // NoCatchUp false or sleepDuration > 0
                        }
                } else { // Not using QPS
+                       i++
                        if useExactly && i >= numCalls {
                                break
                        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/periodic/periodic_test.go new/fortio-1.26.0/periodic/periodic_test.go
--- old/fortio-1.25.0/periodic/periodic_test.go 2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/periodic/periodic_test.go 2022-04-15 22:29:36.000000000 +0200
@@ -271,16 +271,19 @@
        r.Options().ReleaseRunners()
 }
 
-func TestUniform(t *testing.T) {
+func TestUniformAndNoCatchUp(t *testing.T) {
        var count int64
        var lock sync.Mutex
        c := TestCount{&count, &lock}
-       expected := int64(40)
+       // TODO: make an actual test vs sort of just exercise the code.
+       // also explain why 34 (with nocatchup, 40 without)
+       expected := int64(34)
        o := RunnerOptions{
-               QPS:        100,
-               NumThreads: 4,
-               Duration:   time.Second,
+               QPS:        85,
+               NumThreads: 2,
+               Duration:   2 * time.Second,
                Uniform:    true,
+               NoCatchUp:  true,
        }
        r := NewPeriodicRunner(&o)
        r.Options().MakeRunners(&c)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/tcprunner/tcprunner.go new/fortio-1.26.0/tcprunner/tcprunner.go
--- old/fortio-1.25.0/tcprunner/tcprunner.go    2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/tcprunner/tcprunner.go    2022-04-15 22:29:36.000000000 +0200
@@ -16,10 +16,12 @@
 
 import (
        "bytes"
+       "errors"
        "fmt"
        "io"
        "net"
        "sort"
+       "syscall"
        "time"
 
        "fortio.org/fortio/fhttp"
@@ -155,7 +157,7 @@
                        return nil, err
                }
        } else {
-               log.Debugf("Reusing socket %v", conn)
+               log.Debugf("[%d] Reusing socket %+v", c.connID, conn)
        }
        c.socket = nil // because of error returns and single retry
        conErr := conn.SetReadDeadline(time.Now().Add(c.reqTimeout))
@@ -163,10 +165,11 @@
        if c.doGenerate {
                c.req = GeneratePayload(c.connID, c.messageCount) // TODO write directly in buffer to avoid generating garbage for GC to clean
        }
+       expectedLen := len(c.req)
        n, err := conn.Write(c.req)
        c.bytesSent = c.bytesSent + int64(n)
        if log.LogDebug() {
-               log.Debugf("wrote %d (%q): %v", n, string(c.req), err)
+               log.Debugf("[%d] wrote %d (%s): %v", c.connID, n, fnet.DebugSummary(c.req, 256), err)
        }
        if err != nil || conErr != nil {
                if reuse {
@@ -175,25 +178,36 @@
                        conn.Close()
                        return c.Fetch() // recurse once
                }
-               log.Errf("Unable to write to %v %v : %v", conn, c.dest, err)
+               log.Errf("[%d] Unable to write to %v: %v", c.connID, c.dest, err)
                return nil, err
        }
        if n != len(c.req) {
-               log.Errf("Short write to %v %v : %d instead of %d", conn, c.dest, n, len(c.req))
+               log.Errf("[%d] Short write to %v: %d instead of %d", c.connID, c.dest, n, expectedLen)
                return nil, io.ErrShortWrite
        }
        // assert that len(c.buffer) == len(c.req)
-       n, err = conn.Read(c.buffer)
-       c.bytesReceived = c.bytesReceived + int64(n)
-       if log.LogDebug() {
-               log.Debugf("read %d (%q): %v", n, string(c.buffer[:n]), err)
-       }
-       if n < len(c.req) {
-               return c.buffer[:n], errShortRead
-       }
-       if n > len(c.req) {
-               log.Errf("BUG: read more than possible %d vs %d", n, len(c.req))
-               return c.buffer[:n], errLongRead
+       totalRead := 0
+       for {
+               n, err = conn.Read(c.buffer[totalRead:])
+               if log.LogDebug() {
+                       log.Debugf("[%d] read %d (%s): %v", c.connID, n, fnet.DebugSummary(c.buffer[totalRead:totalRead+n], 256), err)
+               }
+               c.bytesReceived = c.bytesReceived + int64(n)
+               totalRead += n
+               if totalRead == expectedLen { // break first, assuming no err, so we don't test that for EOF case
+                       break
+               }
+               if err != nil {
+                       log.Errf("[%d] Unable to read: %v", c.connID, err)
+                       if errors.Is(err, io.EOF) || errors.Is(err, syscall.ECONNRESET) {
+                               return c.buffer[:totalRead], errShortRead
+                       }
+                       return c.buffer[:totalRead], err
+               }
+               if totalRead > expectedLen {
+                       log.Errf("[%d] BUG: read more than possible +%d to %d vs %d", c.connID, n, totalRead, expectedLen)
+                       return c.buffer[:totalRead], errLongRead
+               }
        }
        if !bytes.Equal(c.buffer, c.req) {
                log.Infof("Mismatch between sent %q and received %q", string(c.req), string(c.buffer))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/tcprunner/tcprunner_test.go new/fortio-1.26.0/tcprunner/tcprunner_test.go
--- old/fortio-1.25.0/tcprunner/tcprunner_test.go       2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/tcprunner/tcprunner_test.go       2022-04-15 22:29:36.000000000 +0200
@@ -22,6 +22,7 @@
        "testing"
 
        "fortio.org/fortio/fnet"
+       "fortio.org/fortio/log"
 )
 
 func TestTCPRunnerBadDestination(t *testing.T) {
@@ -56,6 +57,36 @@
        if res.SocketCount != res.RunnerResults.NumThreads {
                t.Errorf("%d socket used, expected same as thread# %d", res.SocketCount, res.RunnerResults.NumThreads)
        }
+       if res.BytesReceived != res.BytesSent {
+               t.Errorf("Bytes received %d should bytes sent %d", res.BytesReceived, res.BytesSent)
+       }
+}
+
+func TestTCPRunnerLargePayload(t *testing.T) {
+       addr := fnet.TCPEchoServer("test-echo-runner", ":0")
+       destination := fmt.Sprintf("tcp://localhost:%d/", addr.(*net.TCPAddr).Port)
+
+       opts := RunnerOptions{}
+       opts.QPS = 10
+       opts.Destination = destination
+       opts.Payload = fnet.GenerateRandomPayload(120000)
+       log.SetLogLevel(log.Debug)
+       res, err := RunTCPTest(&opts)
+       if err != nil {
+               t.Error(err)
+               return
+       }
+       totalReq := res.DurationHistogram.Count
+       tcpOk := res.RetCodes[TCPStatusOK]
+       if totalReq != tcpOk {
+               t.Errorf("Mismatch between requests %d and ok %v", totalReq, res.RetCodes)
+       }
+       if res.SocketCount != res.RunnerResults.NumThreads {
+               t.Errorf("%d socket used, expected same as thread# %d", res.SocketCount, res.RunnerResults.NumThreads)
+       }
+       if res.BytesReceived != res.BytesSent {
+               t.Errorf("Bytes received %d should bytes sent %d", res.BytesReceived, res.BytesSent)
+       }
 }
 
 func TestTCPNotLeaking(t *testing.T) {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/ui/restHandler.go new/fortio-1.26.0/ui/restHandler.go
--- old/fortio-1.25.0/ui/restHandler.go 2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/ui/restHandler.go 2022-04-15 22:29:36.000000000 +0200
@@ -136,6 +136,7 @@
        durStr := FormValue(r, jd, "t")
        jitter := (FormValue(r, jd, "jitter") == "on")
        uniform := (FormValue(r, jd, "uniform") == "on")
+       nocatchup := (FormValue(r, jd, "nocatchup") == "on")
        stdClient := (FormValue(r, jd, "stdclient") == "on")
        sequentialWarmup := (FormValue(r, jd, "sequential-warmup") == "on")
        httpsInsecure := (FormValue(r, jd, "https-insecure") == "on")
@@ -173,6 +174,7 @@
                Exactly:     n,
                Jitter:      jitter,
                Uniform:     uniform,
+               NoCatchUp:   nocatchup,
        }
        ro.Normalize()
        uiRunMapMutex.Lock()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/ui/templates/main.html new/fortio-1.26.0/ui/templates/main.html
--- old/fortio-1.25.0/ui/templates/main.html    2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/ui/templates/main.html    2022-04-15 22:29:36.000000000 +0200
@@ -46,7 +46,8 @@
     or run until interrupted:<input type="checkbox" name="t" onchange="toggleDuration(this)" />
     or run for exactly <input type="text" name="n" size="6" value="" /> calls. <br />
     Threads/Simultaneous connections: <input type="text" name="c" size="6" value="8" /> <br />
-    Jitter:<input type="checkbox" name="jitter" /> Uniform:<input type="checkbox" name="uniform" /><br />
+    Uniform:<input type="checkbox" name="uniform" /> or Jitter:<input type="checkbox" name="jitter" /> &nbsp;&nbsp;
+    No Catch-Up (qps is a ceiling): <input type="checkbox" name="nocatchup" /><br />
     Percentiles: <input type="text" name="p" size="20" value="50, 75, 90, 99, 99.9" /> <br />
     Histogram Resolution: <input type="text" name="r" size="8" value="0.0001" /> <br />
     Headers: <br />
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/fortio-1.25.0/ui/uihandler.go new/fortio-1.26.0/ui/uihandler.go
--- old/fortio-1.25.0/ui/uihandler.go   2022-04-04 19:09:59.000000000 +0200
+++ new/fortio-1.26.0/ui/uihandler.go   2022-04-15 22:29:36.000000000 +0200
@@ -148,6 +148,7 @@
        durStr := r.FormValue("t")
        jitter := (r.FormValue("jitter") == "on")
        uniform := (r.FormValue("uniform") == "on")
+       nocatchup := (r.FormValue("nocatchup") == "on")
        grpcSecure := (r.FormValue("grpc-secure") == "on")
        grpcPing := (r.FormValue("ping") == "on")
        grpcPingDelay, _ := time.ParseDuration(r.FormValue("grpc-ping-delay"))
@@ -194,6 +195,7 @@
                Exactly:     n,
                Jitter:      jitter,
                Uniform:     uniform,
+               NoCatchUp:   nocatchup,
        }
        if mode == run {
                ro.Normalize()

++++++ vendor.tar.gz ++++++
