GitHub user ggrohmann edited a comment on the discussion: Help with metadata 
injection for incremental Load

I'm not quite sure what the issue is, but it sounds like too many fields are being passed to the metadata injection for the CSV files (so the step / resulting pipeline searches the file for fields that don't exist)?

A good way to remove a defined set of columns while keeping all the other columns in the stream — the ones you don't want to specify, since they change for every injected, generated pipeline — is the "Select values" step.
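To make the behaviour concrete, here is a small sketch (plain Python, not Hop code) of the logic that "Select values" applies when "Include unspecified fields" is ticked: the named fields are dropped, and every field you did *not* specify passes through unchanged.

```python
# Illustrative only -- mimics a "Select values" step configured to remove
# a known set of columns while passing all unspecified columns through.

def select_values(row: dict, remove: set) -> dict:
    """Drop the fields named in `remove`; keep all other fields as-is."""
    return {name: value for name, value in row.items() if name not in remove}

# A row whose extra columns vary per injected pipeline (names are made up):
row = {"id": 1, "load_date": "2024-01-01", "amount": 9.99, "extra_col": "x"}
print(select_values(row, remove={"load_date"}))
```

Because the unspecified fields are kept, the same injected step works even though each generated pipeline has a different set of columns.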

The injection would look somewhat like this (one row for the file name, n rows for the fields):
![{A5F7D22A-8849-49FE-87B2-5F0D139AC34B}](https://github.com/user-attachments/assets/6271fbae-357d-4032-82b9-d127efae26c7)

Also, the CSV metadata step creates new rows in the stream: for each input column it emits one row containing that column's metadata (column type, ...).
Note that the fields part contains the Select values step (the important part is to tick "Include unspecified fields"):
![{B48CE45A-776B-42AE-A44E-12769CF8F39E}](https://github.com/user-attachments/assets/fa4fe03b-9ee6-46a8-8485-cf6f281d4f56)
![{42B8DC09-80AD-4A9E-B2F4-BED86E150E63}](https://github.com/user-attachments/assets/43aeaeab-e8f9-4c1b-a6f3-1cbf92bec564)
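The one-row-per-column behaviour described above can be sketched like this (again plain Python, not Hop code; the field names and type labels are made-up examples): given the detected columns of a CSV file, the step turns each column into one metadata row.

```python
# Illustrative only -- mimics a CSV metadata step that outputs one row
# per input column, carrying that column's metadata (name, type, ...).

def column_metadata_rows(columns: dict) -> list:
    """columns maps field name -> detected type; return one row per field."""
    return [{"field": name, "type": type_name} for name, type_name in columns.items()]

# Hypothetical detected columns of an incoming CSV file:
print(column_metadata_rows({"id": "Integer", "amount": "Number"}))
```

These metadata rows are what you then feed into the injection (the "n rows for the fields" mentioned above).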

Hope that helps

GitHub link: 
https://github.com/apache/hop/discussions/5208#discussioncomment-12907363
