Re: Issue with PutMongoProcessor using upsert mode.

2018-07-11 Thread Yves HAMEL
Hi,
Ok, I think I posted too quickly.

So I ran more tests.

UPSERT document with operators enabled:

test 1: upsert the document {"id1":1,$set:{"value":"abc"}}, PutMongo
updateQueryKey=id1 => OK
test 2: upsert the document {"id1":1,"id2":1,$set:{"value":"abc"}},
PutMongo updateQueryKey=id1,id2
    line 220: removeUpdateKeys() doesn't remove anything, because this
    method only handles properties containing a dot (p1.p2)
    line 229: update.remove() doesn't remove anything, because it tries to
    remove several keys from the map in a single call (see the sketch after
    these tests)
    So the update parameter sent to MongoDB is
    {"id1":1,"id2":1,$set:{"value":"abc"}} and Mongo throws an "Invalid
    BSON field name id1" exception.
test 3: upsert the document {"ids.id1":1,"value":"abc"}, PutMongo
updateQueryKey=ids.id1 => OK
test 4: upsert the document {"ids.id1":1,"ids.id2":1,"value":"abc"},
PutMongo updateQueryKey=ids.id1,ids.id2 => OK
test 5: upsert the document {"id1":1,"ids.id2":1,"value":"abc"}, PutMongo
updateQueryKey=id1,ids.id2
    line 220: removeUpdateKeys() removes "ids.id2" from the document
    line 229: update.remove() doesn't remove anything, because it tries to
    remove several keys from the map in a single call
    So the update parameter sent to MongoDB is
    {"id1":1,$set:{"value":"abc"}} and Mongo throws an "Invalid BSON field
    name id1" exception.
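
To make the line-229 behaviour concrete, here is a minimal standalone
sketch (my own illustration, not the NiFi source) showing why removing the
comma-separated parameter as a single map key is a no-op:

import java.util.LinkedHashMap;
import java.util.Map;

public class RemoveKeysDemo {
    public static void main(String[] args) {
        Map<String, Object> update = new LinkedHashMap<>();
        update.put("id1", 1);
        update.put("id2", 1);
        update.put("value", "abc");

        // No entry is keyed by the literal string "id1,id2", so this is a no-op.
        update.remove("id1,id2");
        System.out.println(update); // {id1=1, id2=1, value=abc}

        // Splitting the parameter first removes each key individually.
        for (String key : "id1,id2".split(",[\\s]*")) {
            update.remove(key);
        }
        System.out.println(update); // {value=abc}
    }
}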

UPSERT document with whole document:

test 6: upsert the document {"id1":1,"value":"abc"}, PutMongo
updateQueryKey=id1 => OK
test 7: upsert the document {"id1":1,"id2":1,"value":"abc"}, PutMongo
updateQueryKey=id1,id2 => OK
test 8: upsert the document {"ids.id1":1,"value":"abc"}, PutMongo
updateQueryKey=ids.id1
    query: {ids.id1=1}   doc: {value=abc}
    inserted: { "_id" : ObjectId("5b44c5aeab8179ca72466a2f"),
    "value" : "abc" }
test 9: upsert the document {"ids.id1":1,"ids.id2":1,"value":"abc"},
PutMongo updateQueryKey=ids.id1,ids.id2
    query: {ids.id1=1, ids.id2=1}  doc: {value=abc}
    inserted: { "_id" : ObjectId("5b44c51cab8179ca7246648c"),
    "value" : "abc" }
test 10: upsert the document {"id1":1,"ids.id2":1,"value":"abc"}, PutMongo
updateQueryKey=id1,ids.id2
    query: {id1=1, value=abc}   doc: {id1=1, value=abc}
    inserted: { "_id" : ObjectId("5b44c44bab8179ca72465c96"),
    "id1" : 1, "value" : "abc" }
(So in whole-document mode the dotted key fields are silently dropped from
the inserted documents, and in test 10 the query itself is built wrong.)

I think we should distinguish two update modes:
- First use case: updateQuery is used (not empty). In this case the
query and the doc (flowfile) should be transmitted to MongoDB without any
change.
- Second use case: updateQueryKey is used. The processor has to build
the query based on the document and the updateQueryKey parameter; then we
have two modes (see the sketch after this list):
    - With operators enabled: the doc (flowfile) has the following form:
      {"key":"a","keys.k1":"b",$set:{"v1":"abc","v.v1":"abcd"},$unset:{}}
      Given the updateQueryKey parameter key,keys.k1, the processor has to
      build the query parameter {"key":"a","keys.k1":"b"} and remove those
      keys from the document.
    - With whole document: the JSON doc is
      {"key":"a","keys":{"k1":"b","k2":"c"},"v1":"abc","v":{"v1":"abcd"}...}
      Given the updateQueryKey parameter key,keys.k1, the processor has to
      extract the values of the different keys and build the query
      {"key":"a","keys.k1":"b"}; the doc remains unchanged.
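
As a rough sketch of that logic (illustrative names only, using
org.bson.Document; this is not the attached patch itself):

import java.util.Map;
import org.bson.Document;

class PutMongoQueryBuilder {
    // Build the Mongo query from the parsed flowfile document and the
    // comma-separated updateQueryKey parameter. In operator mode the key
    // fields are also removed from the update document, leaving only the
    // $set/$unset operators.
    @SuppressWarnings("unchecked")
    static Document buildQuery(Map<String, Object> doc, String updateKeyParam,
                               boolean operatorsEnabled) {
        Document query = new Document();
        for (String key : updateKeyParam.split(",[\\s]*")) {
            Object value;
            if (operatorsEnabled || !key.contains(".")) {
                // Operator mode: dotted keys appear literally, e.g. "keys.k1":"b".
                value = doc.get(key);
            } else {
                // Whole-document mode: walk the nested maps for "keys.k1".
                value = doc;
                for (String part : key.split("\\.")) {
                    value = ((Map<String, Object>) value).get(part);
                }
            }
            query.put(key, value);
            if (operatorsEnabled) {
                doc.remove(key); // key fields must not remain beside $set/$unset
            }
        }
        return query;
    }
}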

I updated the code of my custom processor and it works.
Here is the Java code part I modified in PutMongo. (See
attached file: PutMongo-part.java)
What do you think about that?

Thanks




Yves HAMEL
Direction Digital & Systèmes d’Information Groupe
LM_DATA

MACIF - 2 et 4, rue Pied de Fond - 79037 Niort cedex 9
Tél. : +33 (0)5 49 09 36 06
Email : mon.em...@macif.fr / Pré Doyen 2 – bureau 999

www.macif.fr - Appli présente sur Google Play Store & Apple Store




From: Mike Thomsen
To: users@nifi.apache.org
Date: 09/07/2018 17:10
Subject: Re: Issue with PutMongoProcessor using upsert mode.



removeUpdateKeys should be skipping your key because it's a simple key.

private void removeUpdateKeys(String updateKeyParam, Map<String, Object> doc) {
    String[] parts = updateKeyParam.split(",[\\s]*");
    for (String part : parts) {
        // Only dotted keys are ever removed from the document.
        if (part.contains(".")) {
            doc.remove(part);
        }
    }
}

Are you sure the value is just identifiant and not something like
something.identifiant?

On Mon, Jul 9, 2018 at 9:42 AM Yves HAMEL  wrote:

  Hi,
  I am trying to "upsert" documents in a MongoDB collection.
  When the document exists, the PutMongo processor updates the document, but
  when the document doesn't exist, the processor does not create it.
  Here is the setting of my PutMongo processor :
          Mode : update
          Upsert

Re: Reliersoft jdbc driver for nifi salesforce integration

2018-07-11 Thread Pierre Villard
Hi Vitaly,

Can you share the configuration of your processor? Can you switch the log
level to debug for this processor? Do you have input relationships on the
processor (from another processor? failure?)?

Thanks
Pierre

2018-07-10 18:11 GMT+02:00 Vitaly Krivoy :

> Has anyone tried to use the Reliersoft JDBC driver to pull data from
> Salesforce using the ExecuteSQL processor? I’ve created an object Pricipal__c
> in Salesforce which contains three records.
> I can run “select * from Pricipal__c” in DbVisualizer and it works as
> expected, but I get neither data nor errors when I run the same SQL query
> from ExecuteSQL. The output from the NiFi log shows that the Reliersoft JDBC
> driver connects correctly and finds 3 records, but for some reason nothing
> is placed into flow files. Does anyone have any clues or recommendations?
> Thanks.
>
>
>
>
>
> 10:33:28,396 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.Driver Connect options: url: jdbc:sforce://
> login.salesforce.com
>
> 2018-07-10 10:33:28,758 INFO [Flow Service Tasks Thread-2]
> o.a.nifi.controller.StandardFlowService Saved flow controller
> org.apache.nifi.controller.FlowController@f5ebc9a // Another save pending
> = false
>
> 2018-07-10 10:33:30,361 INFO [Timer-Driven Process Thread-6] rc.Y [78
> 810458350 1768753242 true true true] Communicating with SF server took 870
> ms
>
> 2018-07-10 10:33:30,376 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Partner server url:
> https://na53.salesforce.com/services/Soap/u/35.0/00Df201Ga7I
>
> 2018-07-10 10:33:30,376 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Partner session Id: 00Df201Ga7I!
> ASAAQDroAcrkJz3eYsqA4Vjzb..V2CoFwQaeMru8Xqg5mZcSdQHaN4kWS
> p1Czisn1ZSV0hvTQFEdPlwv04kznPOYn58V.E_y
>
> 2018-07-10 10:33:33,010 INFO [Timer-Driven Process Thread-6] rc.Y [78
> 1326156447 1016293809 true true true] Communicating with SF server took 399
> ms
>
> 2018-07-10 10:33:33,073 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.util.a Metadata is up to date, having last
> been updated on Fri Jul 06 17:04:06 EDT 2018.
>
> 2018-07-10 10:33:33,081 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Disconnecting...
>
> 2018-07-10 10:33:33,238 INFO [Timer-Driven Process Thread-6] rc.Y [78
> 150728874 1016293809 true true true] Communicating with SF server took 126
> ms
>
> 2018-07-10 10:33:33,238 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Disconnected.
>
> 2018-07-10 10:33:33,241 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.Driver Connect options: url: jdbc:sforce://
> login.salesforce.com
>
> 2018-07-10 10:33:34,952 INFO [Timer-Driven Process Thread-6] rc.Y [78
> 1795241438 -913733887 true true true] Communicating with SF server took
> 1432 ms
>
> 2018-07-10 10:33:34,953 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Partner server url:
> https://na53.salesforce.com/services/Soap/u/35.0/00Df201Ga7I
>
> 2018-07-10 10:33:34,953 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.f Partner session Id: 00Df201Ga7I!ASAAQM8pi_
> NGN4gPpJVZENNgLyy.jbYBza3vrAupKVVeFhBPxCw_VmDifRuHQ2zhUXS1Q5dUavbMyWkMQa
> ItdXrIfAdg3owg
>
> 2018-07-10 10:33:37,037 INFO [Timer-Driven Process Thread-6] rc.Y [78
> 287958716 -1252905073 true true true] Communicating with SF server took 378
> ms
>
> 2018-07-10 10:33:37,038 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.util.a Metadata is up to date, having last
> been updated on Fri Jul 06 17:04:06 EDT 2018.
>
> 2018-07-10 10:33:37,099 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.p Calling getColumns(null, null, PRICIPAL__C,
> null) method
>
> 2018-07-10 10:33:37,124 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.w No more records to fetch from back-end
>
> 2018-07-10 10:33:37,130 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.v PREPARED SELECT TO EXECUTE - select * from
> Pricipal__c
>
> 2018-07-10 10:33:37,132 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.p Calling getColumns(null, null, PRICIPAL__C,
> null) method
>
> 2018-07-10 10:33:37,136 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.w No more records to fetch from back-end
>
> 2018-07-10 10:33:37,139 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.v SELECT WITH SET PARAMETERS TO EXECUTE - SELECT
> Id, OwnerId, IsDeleted, Name, CreatedDate, CreatedById, LastModifiedDate,
> LastModifiedById, SystemModstamp, LastViewedDate, LastReferencedDate,
> First_Name__c, Last_Name__c, Email__c FROM Pricipal__c
>
> 2018-07-10 10:33:37,139 INFO [Timer-Driven Process Thread-6]
> com.reliersoft.sforce.jdbc.y EXECUTE QUERY FOR SQL - SELECT Id, OwnerId,
> IsDeleted, Name, CreatedDate, CreatedById, LastModifiedDate,
> LastModifiedById, SystemModstamp, LastViewedDate, LastReferencedDate,
> First_Name__c, Last_Name__c, Email__c FROM Pricipal__c U

RE: [EXT] Adding a file to a zip file

2018-07-11 Thread Peter Wicks (pwicks)
Hi Kiran,

In your flow, how do you avoid duplicate files going into MergeContent?

For example:

  1.  file1.zip goes into Unpack zip file; it contains 5 files.
  2.  These 5 files are sent down both success paths (AttributesToJSON, and
increment fragment index and count).
  3.  5 files show up at MergeContent and wait for that 1 extra file.
  4.  Meanwhile, 5 files show up at AttributesToJSON and then have their
fragment.index set to 1…
  5.  10 files are now available to MergeContent with the same fragment
identifier, covering all required indexes…

Is this not happening?

Thanks,
  Peter

From: Kiran [mailto:kiran@protonmail.com]
Sent: Tuesday, July 10, 2018 2:36 PM
To: users 
Subject: [EXT] Adding a file to a zip file

Hello,

I've got a requirement to add a JSON file to an existing zip file.

I'm doing this by:

  1.  Unpacking the ZIP file
  2.  Incrementing the fragment.index and fragment.count of the original
files (see the sketch after this list)
  3.  Creating the JSON file, setting its fragment.index to 1 and setting
the fragment.count
  4.  Merging the contents of the files to create the resulting ZIP file
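
For step 2, one common way to do the increment (an illustrative sketch, not
necessarily Kiran's exact flow) is an UpdateAttribute processor with
Expression Language properties:

    fragment.index = ${fragment.index:plus(1)}
    fragment.count = ${fragment.count:plus(1)}

These attribute names are the ones UnpackContent emits and that
MergeContent's Defragment strategy uses to reassemble the bundle.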
I've attached an image of the data flow and the settings for the MergeContent 
processor.

When I process the ZIP files one by one this works fine, but when I process
the ZIP files in bulk some work and others fail in the MergeContent
processor. I'm guessing that it's to do with the settings of the
MergeContent processor. Can anyone provide me with insight on what I'm
doing wrong here?

Thanks

Kiran



Re: [EXT] Adding a file to a zip file

2018-07-11 Thread Mark Payne
Kiran,

What do you have set for the "Maximum number of Bins" property of MergeContent?
Each 'zip bundle' will have all of its FlowFiles added to the same bin.
So if you have more 'zip bundles' coming in than you have available bins,
it will evict one of the bins before all of its FlowFiles have arrived. I
suspect this is your issue. If so, you can probably increase the number of
available bins to take care of this.
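
For reference, a hedged example of the relevant MergeContent settings (the
property names are from the standard processor; the bin count is purely
illustrative and should exceed the number of zip bundles in flight at once):

    Merge Strategy         : Defragment
    Merge Format           : ZIP
    Maximum number of Bins : 100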

Thanks
-Mark



From: Kiran [mailto:kiran@protonmail.com]
Sent: Tuesday, July 10, 2018 2:36 PM
To: users <users@nifi.apache.org>
Subject: [EXT] Adding a file to a zip file

Hello,

I've got a requirement to add a JSON file to an existing zip file.

I'm doing this by:

  1.  Unpacking the ZIP file
  2.  Incrementing the fragment.index and fragment.count of the original
files
  3.  Creating the JSON file, setting its fragment.index to 1 and setting
the fragment.count
  4.  Merging the contents of the files to create the resulting ZIP file

I've attached an image of the data flow and the settings for the MergeContent 
processor.

When I process the ZIP files one by one this works fine, but when I process
the ZIP files in bulk some work and others fail in the MergeContent
processor. I'm guessing that it's to do with the settings of the
MergeContent processor. Can anyone provide me with insight on what I'm
doing wrong here?

Thanks

Kiran



Re: [EXT] Adding a file to a zip file

2018-07-11 Thread Kiran
Mark,

Thank you, that was the issue; it's all working fine now.

Thanks

‐‐‐ Original Message ‐‐‐
On 11 July 2018 3:32 PM, Mark Payne  wrote:

> Kiran,
>
> What do you have set for the "Maximum number of Bins" property of
> MergeContent?
> Each 'zip bundle' will have all of its FlowFiles added to the same bin.
> So if you have more 'zip bundles' coming in than you have available bins,
> it will evict one of the bins before all of its FlowFiles have arrived. I
> suspect this is your issue. If so, you can probably increase the number of
> available bins to take care of this.
>
> Thanks
> -Mark
>
>> From: Kiran [mailto:kiran@protonmail.com]
>> Sent: Tuesday, July 10, 2018 2:36 PM
>> To: users 
>> Subject: [EXT] Adding a file to a zip file
>>
>> Hello,
>>
>> I've got a requirement to add a JSON file to an existing zip file.
>>
>> I'm doing this by:
>>
>> - Unpacking the ZIP file
>> - Incrementing the fragment.index and fragment.count of the original files
>> - Creating the JSON file, setting its fragment.index to 1 and setting the
>> fragment.count
>> - Merging the contents of the files to create the resulting ZIP file
>>
>> I've attached an image of the data flow and the settings for the 
>> MergeContent processor.
>>
>> When I process the ZIP files one by one this works fine, but when I process
>> the ZIP files in bulk some work and others fail in the MergeContent
>> processor. I'm guessing that it's to do with the settings of the
>> MergeContent processor. Can anyone provide me with insight on what I'm
>> doing wrong here?
>>
>> Thanks
>>
>> Kiran