Ah, good point. Well, it's possible to break out of the eachLine call by throwing an exception, although it obviously makes the code a little less elegant.
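Something like this, for example (just a quick, untested sketch; the exception class name and the line limit are arbitrary, and "socket" is the variable from your snippet):

class EnoughLinesException extends RuntimeException {}

int maxLines = 10   // arbitrary limit, just for the example
int count = 0

try {
    socket.withStreams { input, output ->
        input.withReader { reader ->
            reader.eachLine { String line ->
                // handle the line here
                println line
                if (++count >= maxLines) {
                    // thrown only to terminate the eachLine iteration early
                    throw new EnoughLinesException()
                }
            }
        }
    }
} catch (EnoughLinesException ignored) {
    // expected: we just wanted to stop reading
}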
On Tue, Apr 12, 2016 at 6:27 PM, Gerald Wiltse <jerrywil...@gmail.com> wrote:

> Thank you for the response!
>
> I had it that way when I started. The problem with using
> reader.eachLine{} is that there is no way to break out after a specific
> number of lines have been received (other than using a
> GroovyRuntimeException, which is undesirable).
>
> Ref:
> http://stackoverflow.com/questions/9916261/groovy-inputstream-reading-closure-hanging
>
> I was sad to discover that there's an eachLine{} method, but not a
> readLine() method on the reader. In my case (and perhaps many others)
> readLine() would cut out the need to construct the BufferedReader and
> InputStreamReader.
>
> Gerald R. Wiltse
> jerrywil...@gmail.com
>
> On Tue, Apr 12, 2016 at 12:19 PM, Guillaume Laforge <glafo...@gmail.com>
> wrote:
>
>> You can do an input.withReader { reader -> ... } to have a buffered
>> reader on the input stream.
>> And with that reader, you can do reader.eachLine { String s -> ... } to
>> iterate over all the lines.
>> Last interesting nugget: there's also the class
>> groovy.io.LineColumnReader, potentially, if you're interested in keeping
>> track of the position (column and line number) in the file.
>>
>> Guillaume
>>
>> On Tue, Apr 12, 2016 at 5:53 PM, Gerald Wiltse <jerrywil...@gmail.com>
>> wrote:
>>
>>> I'm trying to use a ServerSocket to receive HTTP messages from a
>>> client which is POSTing them as chunked. I just want to capture the
>>> text content being posted (plain text). Any input on how to do this
>>> better would be welcomed.
>>>
>>> Here is my existing and not very elegant solution. When dealing with
>>> ServerSocket, one has to handle the headers and chunk boundaries
>>> manually, and this is what I came up with. I looked at the filterLine
>>> method on the reader; maybe that's part of a solution, I'm not sure.
>>>
>>> socket.withStreams { input, output ->
>>>     BufferedReader reader = new BufferedReader(new InputStreamReader(input))
>>>     while (currentLineCount < processor.newLineCount) {
>>>         line = reader.readLine()
>>>
>>>         if (line && line.size() > 3) {
>>>             processor.processFormats(line)
>>>         }
>>>         currentLineCount++
>>>     }
>>> }
>>>
>>> Caveats:
>>>
>>> 1. I have been trying to process line by line to minimize memory
>>> impact, rather than buffering the whole collection. I'd like to keep
>>> it that way.
>>>
>>> 2. These 4 Jetty libraries are available on the classpath, so I could
>>> leverage them, but I can't add other libraries.
>>>
>>> compile 'org.eclipse.jetty:jetty-server:8.1.2.v20120308'
>>> compile 'org.eclipse.jetty:jetty-continuation:8.1.2.v20120308'
>>> compile 'org.eclipse.jetty:jetty-io:8.1.2.v20120308'
>>> compile 'org.eclipse.jetty:jetty-util:8.1.2.v20120308'
>>>
>>> I would make the Service and Handler in Jetty, but I can't find any
>>> good examples that fit my situation.
>>>
>>> Gerald R. Wiltse
>>> jerrywil...@gmail.com
>>
>> --
>> Guillaume Laforge
>> Apache Groovy committer & PMC Vice-President
>> Product Ninja & Advocate at Restlet <http://restlet.com>
>>
>> Blog: http://glaforge.appspot.com/
>> Social: @glaforge <http://twitter.com/glaforge> / Google+
>> <https://plus.google.com/u/0/114130972232398734985/posts>

--
Guillaume Laforge
Apache Groovy committer & PMC Vice-President
Product Ninja & Advocate at Restlet <http://restlet.com>

Blog: http://glaforge.appspot.com/
Social: @glaforge <http://twitter.com/glaforge> / Google+
<https://plus.google.com/u/0/114130972232398734985/posts>
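P.S. Another option, since withReader actually hands you a BufferedReader (it wraps the stream in one): loop on readLine() yourself and stop whenever you've read enough, with no exception needed. A rough, untested sketch, reusing the socket and processor objects from the snippet quoted above:

socket.withStreams { input, output ->
    input.withReader { reader ->            // reader is a BufferedReader
        int currentLineCount = 0
        while (currentLineCount < processor.newLineCount) {
            String line = reader.readLine()
            if (line == null) {
                break                       // end of stream
            }
            if (line.size() > 3) {          // same length check as the original snippet
                processor.processFormats(line)
            }
            currentLineCount++
        }
    }
}

That avoids constructing the BufferedReader and InputStreamReader by hand, and you still process one line at a time.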