What about:
COPY ... FROM ... WITH PATTERN 'regexp_pattern'
where each column is matched to the corresponding capture group.
This could handle the quite common case of varying whitespace as the column
separator:
COPY log (col1, col2, col3) FROM 'log.txt' WITH PATTERN
'^(\S+)\s+(\S+)\s+(\S+)$'
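The proposed (hypothetical, not existing COPY syntax) behavior can be sketched outside the server: match each input line against the regexp and let the capture groups supply the column values for one row. The function name and error handling here are illustrative only.

```python
import re

# The pattern from the example above: three whitespace-separated fields.
PATTERN = re.compile(r'^(\S+)\s+(\S+)\s+(\S+)$')

def copy_with_pattern(lines, pattern=PATTERN):
    """Yield one tuple of column values per line, taken from capture groups."""
    for line in lines:
        m = pattern.match(line.rstrip('\n'))
        if m is None:
            raise ValueError(f'line does not match pattern: {line!r}')
        yield m.groups()

log = [
    '2021-05-06  12:00:01 INFO\n',   # varying amounts of whitespace
    '2021-05-06 12:00:02\tWARN\n',   # tabs too, since \s+ matches them
]
rows = list(copy_with_pattern(log))
```

Rows that fail to match would presumably be an error, as with a malformed line in CSV mode today.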
Darafei "Komяpa" Praliaskouski wrote:
> What I would prefer is some new COPY mode like RAW that will just push
> whatever it gets on the stdin/input into the cell on the server side. This
> way it can be proxied by psql, utilize existing infra for passing streams
> and be used in shell
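The RAW mode quoted above amounts to a pure byte passthrough: whatever arrives on the input stream lands in a single cell with no parsing, escaping, or charset conversion. A minimal sketch of that semantics (the function name is invented for illustration):

```python
import io

def copy_raw(stream: io.BufferedIOBase) -> bytes:
    """Return the stream's content verbatim, as one opaque value."""
    return stream.read()

# Embedded newlines, tabs, NULs, and non-UTF-8 bytes all survive untouched.
payload = b'line1\nline2\t\x00binary bytes \xff'
cell = copy_raw(io.BytesIO(payload))
```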
On 5/6/21 7:41 AM, Isaac Morland wrote:
> On Thu, 6 May 2021 at 02:21, Darafei "Komяpa" Praliaskouski
> <m...@komzpa.net> wrote:
>
>
> What I would prefer is some new COPY mode like RAW that will just
> push whatever it gets on the stdin/input into the cell on the
> server
On Thu, 6 May 2021 at 12:02, Joel Jacobson wrote:
> On Thu, May 6, 2021, at 13:41, Isaac Morland wrote:
>
> Yes! A significant missing feature is “take this arbitrary bucket of bits
> and move it to/from the database from/to this file without modification of
> any kind”. There are all sorts of
On Thu, May 6, 2021, at 13:41, Isaac Morland wrote:
> On Thu, 6 May 2021 at 02:21, Darafei "Komяpa" Praliaskouski
> wrote:
>
>> What I would prefer is some new COPY mode like RAW that will just push
>> whatever it gets on the stdin/input into the cell on the server side. This
>> way it can
On Thu, 6 May 2021 at 02:21, Darafei "Komяpa" Praliaskouski
wrote:
> What I would prefer is some new COPY mode like RAW that will just push
> whatever it gets on the stdin/input into the cell on the server side. This
> way it can be proxied by psql, utilize existing infra for passing streams
>
I have similar problems, and what is really needed is a way to get a file
from the client side into a server-side object that can be dealt with later.
The most popular mechanism is COPY, which is built into the psql tool: psql
supports the \copy wrapper, and there is COPY FROM STDIN. However, it is not
On Wed, May 5, 2021, at 20:45, Tom Lane wrote:
> "Joel Jacobson" <joel@compiler.org> writes:
> > I think you misunderstood the problem.
> > I don't want the entire file to be considered a single value.
> > I want each line to become its own row, just a row with a single column.
>
> > So
Joel Jacobson schrieb am 05.05.2021 um 17:30:
> Could it be an idea to exploit the fact that DELIMITER E'\n' is currently an
> error?
>
> ERROR: COPY delimiter cannot be newline or carriage return
>
> That is, to change E'\n' to be a valid delimiter, which would simply read
> each line
>
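What making E'\n' a valid delimiter would mean, sketched outside of COPY: split on newlines only, so each line becomes its own single-column row, while tabs, commas, and other would-be delimiters stay inside the value.

```python
def lines_as_rows(text: str):
    """Split on newlines only; each line is one row with one column."""
    return [(line,) for line in text.splitlines()]

# Tabs and commas are not treated as separators; only '\n' ends a row.
rows = lines_as_rows('a\tb\nc,d\n')
```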
On Wed, May 5, 2021, at 21:51, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
> > On 5/5/21 2:45 PM, Tom Lane wrote:
> >> Yeah, that's because of the conversion to "chr". But a regexp
> >> is overkill for that anyway. Don't we have something that will
> >> split on
Andrew Dunstan writes:
> On 5/5/21 2:45 PM, Tom Lane wrote:
>> Yeah, that's because of the conversion to "chr". But a regexp
>> is overkill for that anyway. Don't we have something that will
>> split on simple substring matches?
> Not that I know of. There is split_part but I don't think
On 5/5/21 3:36 PM, Justin Pryzby wrote:
> On Wed, May 05, 2021 at 02:45:41PM -0400, Tom Lane wrote:
>>> I'm currently using the pg_read_file()-hack in a project,
>>> and even though it can read files up to 1GB,
>>> using e.g. regexp_split_to_table() to split on E'\n'
>>> seems to need 4x as much
On Wed, May 05, 2021 at 02:45:41PM -0400, Tom Lane wrote:
> > I'm currently using the pg_read_file()-hack in a project,
> > and even though it can read files up to 1GB,
> > using e.g. regexp_split_to_table() to split on E'\n'
> > seems to need 4x as much memory, so it only
> > works with files
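The memory complaint above comes from regexp_split_to_table() materializing every line of the (up to 1 GB) file at once. The contrast can be sketched in Python: an eager split builds the full list up front, while a streaming split keeps only one line in memory at a time. This is an illustration of the general pattern, not a claim about PostgreSQL internals.

```python
import re

def split_all(text):
    """Eager: builds the full list of lines up front."""
    return re.split(r'\n', text)

def split_stream(text):
    """Lazy: yields one line at a time by iterating over the matches."""
    start = 0
    for m in re.finditer(r'\n', text):
        yield text[start:m.start()]
        start = m.end()
    yield text[start:]
```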
On 5/5/21 2:45 PM, Tom Lane wrote:
> "Joel Jacobson" writes:
>> I think you misunderstood the problem.
>> I don't want the entire file to be considered a single value.
>> I want each line to become its own row, just a row with a single column.
>> So I actually think COPY seems like a perfect
"Joel Jacobson" writes:
> I think you misunderstood the problem.
> I don't want the entire file to be considered a single value.
> I want each line to become its own row, just a row with a single column.
> So I actually think COPY seems like a perfect match for the job,
> since it does precisely
On Wed, May 5, 2021, at 19:34, Isaac Morland wrote:
> Would DELIMITER NULL make sense? The existing values are literal strings so
> NULL fits with that. Do we already have NONE as a keyword somewhere? It's
> listed in the keyword appendix to the documentation but I can't think of
> where it is
On Wed, May 5, 2021, at 19:58, David G. Johnston wrote:
> On Wed, May 5, 2021 at 10:34 AM Isaac Morland wrote:
>> On Wed, 5 May 2021 at 13:23, Chapman Flack wrote:
>>> On 05/05/21 13:02, David G. Johnston wrote:
>>> > Why not just allow: "DELIMITER NONE" to be valid syntax meaning exactly
>>> >
On Wed, May 5, 2021 at 10:34 AM Isaac Morland
wrote:
> On Wed, 5 May 2021 at 13:23, Chapman Flack wrote:
>
>> On 05/05/21 13:02, David G. Johnston wrote:
>> > Why not just allow: "DELIMITER NONE" to be valid syntax meaning exactly
>> > what it says and does exactly what you desire?
>>
>> What
On Wed, 5 May 2021 at 13:23, Chapman Flack wrote:
> On 05/05/21 13:02, David G. Johnston wrote:
> > Why not just allow: "DELIMITER NONE" to be valid syntax meaning exactly
> > what it says and does exactly what you desire?
>
> What would it mean? That you get one column, multiple rows of text
>
On 05/05/21 13:02, David G. Johnston wrote:
> Why not just allow: "DELIMITER NONE" to be valid syntax meaning exactly
> what it says and does exactly what you desire?
What would it mean? That you get one column, multiple rows of text
corresponding to "lines" delimited by something, or that you
On Wed, May 5, 2021 at 8:31 AM Joel Jacobson wrote:
> Could it be an idea to exploit the fact that DELIMITER E'\n' is currently
> an error?
>
>
Why not just allow: "DELIMITER NONE" to be valid syntax meaning exactly
what it says and does exactly what you desire?
David J.