Re: UUID processor handling of empty string

2016-04-17 Thread Susmit Shukla
Hi Erick/Jack,

I agree that "Your code is violating the contract for the UUID update
processor," so the index could be in a bad state. I have already put in the
fix and no further action is needed. I was just curious about the resulting
behavior.
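
(For the record, the client-side fix was simply to stop sending the empty
value; a rough SolrJ sketch:)

SolrInputDocument doc = new SolrInputDocument();
// Do NOT do this: an empty string is still a value, and it suppresses
// UUID generation:
//   doc.addField("id", "");
doc.addField("name", "some value");  // omit "id" entirely so the processor generates one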

For completeness, here are my results.
I indexed 2 docs with these fields, using the Solr console's Documents tab:


doc1: {"id":""}
doc2: {"id":""}

matchAllDocs query
q=*:*&sort=id+desc
"numFound": 2, "start": 0, "docs": [ {"id": "9542901e-ede3-46dc-af6c-c30025c7b417"}, {"id": "f29fcb97-ef5e-4c3e-b4fe-f50a963f894d"} ]

q=*:*&sort=id+asc - no change in order
"numFound": 2, "start": 0, "docs": [ {"id": "9542901e-ede3-46dc-af6c-c30025c7b417"}, {"id": "f29fcb97-ef5e-4c3e-b4fe-f50a963f894d"} ]

doc1: {"id":"whatever"}
got error:
"error": { "msg": "Invalid UUID String: 'whatever'",

doc1: {"_version_":-1} - id field is omitted but atleast one field needed
to index
doc2: {"_version_":-1}

matchAllDocs query
q=*:*&sort=id+desc
"numFound": 2, "start": 0, "docs": [ {"id": "c4e19489-fad1-42f4-b216-88ba550f3d16"}, {"id": "99d652b8-3eb6-4a9f-a722-33246e8553d4"} ]

q=*:*&sort=id+asc - works
"numFound": 2, "start": 0, "docs": [ {"id": "99d652b8-3eb6-4a9f-a722-33246e8553d4"}, {"id": "c4e19489-fad1-42f4-b216-88ba550f3d16"} ]
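
For anyone who finds this thread later: the fix follows Hoss's suggestion
below. A minimal sketch of such a chain in solrconfig.xml (the chain name
and default flag here are my own choice):

<updateRequestProcessorChain name="uuid-generation" default="true">
  <!-- trim whitespace-only values down to truly empty strings -->
  <processor class="solr.TrimFieldUpdateProcessorFactory"/>
  <!-- drop zero-length values so "id":"" looks like a missing field -->
  <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
  <!-- the UUID processor now sees no id and generates one -->
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>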

On Sat, Apr 16, 2016 at 8:01 PM, Erick Erickson 
wrote:

> I did a quick experiment (admittedly with 5.x, but even if this is a
> bug it won't be back-ported to 5.3) and this works exactly as I
> expect. I have three docs with IDs as follows:
> doc1: an empty string. This is equivalent to your ""
> doc2: whatever
> doc3: (no id field at all)
>
> As expected, when the output comes back doc1 has an empty field, doc2
> has "whatever" and doc3 has a newly-generated uuid that happens to
> start with "f".
>
> Adding &sort=id asc returns:
> doc1: (empty string)
> doc3: fblahblah
> doc2: whatever
>
> Adding &sort=id desc returns:
> doc2: whatever
> doc3: fblahblah
> doc1: (empty string)
>
> So for about the third time, "what do you mean by 'doesn't work'?"
> Provide simple example data (just how you specify the "id" field is
> sufficient). Provide the requests you're using. Point out what's not
> as you expect.
>
> You might want to review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> Erick
>
> On Sat, Apr 16, 2016 at 9:54 AM, Jack Krupansky
>  wrote:
> > Remove that line of code from your client, or... add the remove blank
> > field update processor as Hoss suggested. Your code is violating the
> > contract for the UUID update processor. An empty string is still a value,
> > and the presence of a value is an explicit trigger to suppress the UUID
> > update processor.
> >
> > -- Jack Krupansky
> >
> > On Sat, Apr 16, 2016 at 12:41 PM, Susmit Shukla
> > wrote:
> >
> >> I am seeing the UUID getting generated when I set the field as an empty
> >> string like this - solrDoc.addField("id", ""); with solr 5.3.1 and based
> >> on the above schema.
> >> The resulting documents in the index are searchable but not sortable.
> >> Could someone verify whether this bug exists and file a jira?
> >>
> >> Thanks,
> >> Susmit
> >>
> >>
> >>
> >> On Sat, Apr 16, 2016 at 8:56 AM, Jack Krupansky <jack.krupan...@gmail.com>
> >> wrote:
> >>
> >> > "UUID processor factory is generating uuid even if it is empty."
> >> >
> >> > The processor will generate the UUID only if the id field is not
> >> > specified in the input document. Empty value and value not present are
> >> > not the same thing.
> >> >
> >> > So, please clarify your specific situation.
> >> >
> >> >
> >> > -- Jack Krupansky
> >> >
> >> > On Thu, Apr 14, 2016 at 7:20 PM, Susmit Shukla <shukla.sus...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi Chris/Erick,
> >> > >
> >> > > Does not work in the sense that the order of documents does not change
> >> > > when changing sort from asc to desc.
> >> > > This could be just a trivial bug where the UUID processor factory is
> >> > > generating a uuid even if the field is empty.
> >> > > This is on solr 5.3.0.
> >> > >
> >> > > Thanks,
> >> > > Susmit
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Thu, Apr 14, 2016 at 2:30 PM, Chris Hostetter
> >> > > <hossman_luc...@fucit.org> wrote:
> >> > >
> >> > > >
> >> > > > I'm also confused by what exactly you mean by "doesn't work" but a
> >> > > > general suggestion you can try is putting the
> >> > > > RemoveBlankFieldUpdateProcessorFactory before your UUID Processor...
> >> > > >
> >> > > > https://lucene.apache.org/solr/6_0_0/solr-core/org/apache/solr/update/processor/RemoveBlankFieldUpdateProcessorFactory.html
> >> > > >
> >> > > > If you are also worried about strings that aren't exactly empty, but
> >> > > > consist only of whitespace, you can put TrimFieldUpdateProcessorFactory
> >> > > > before RemoveBlankFieldUpdateProcessorFactory ...

Re: Verifying - SOLR Cloud replaces load balancer?

2016-04-17 Thread John Bickerstaff
Thanks, so on the matter of indexing -- while I could isolate a cloud
replica from queries by not including it in the load balancer's list...

... I cannot isolate any of the replicas from an indexing perspective by a
similar strategy because the SOLR leader decides who does indexing?  Or do
all "nodes" index the same incoming document independently?

Now that I know I still need a load balancer, I guess I'm trying to find a
way to keep indexing load off servers that are busy serving search
results...  Possibly by having one or two servers just handle indexing...

Perhaps I'm looking in the wrong direction though -- and should just spin
up more replicas to handle more indexing load?
On Apr 17, 2016 10:46 PM, "Walter Underwood"  wrote:

No, Zookeeper is used for managing the locations of replicas and the leader
for indexing. Queries should still be distributed with a load balancer.

Queries do NOT go through Zookeeper.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Apr 17, 2016, at 9:35 PM, John Bickerstaff 
wrote:
>
> My prior use of SOLR in production was pre SOLR cloud.  We put a
> round-robin  load balancer in front of replicas for searching.
>
> Do I understand correctly that a load balancer is unnecessary with SOLR
> Cloud?  I. E. -- SOLR and Zookeeper will balance the load, regardless of
> which replica's URL is getting hit?
>
> Are there any caveats?
>
> Thanks,


Re: Verifying - SOLR Cloud replaces load balancer?

2016-04-17 Thread Walter Underwood
No, Zookeeper is used for managing the locations of replicas and the leader for 
indexing. Queries should still be distributed with a load balancer.

Queries do NOT go through Zookeeper.
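
(One nuance for Java clients: SolrJ's CloudSolrClient reads cluster state
from Zookeeper and load-balances requests across active replicas on the
client side, so it can stand in for an external load balancer. A minimal
sketch, assuming SolrJ 5.x and placeholder hosts:)

CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181");
client.setDefaultCollection("mycollection");
// the query itself goes directly to a Solr node, not through Zookeeper
QueryResponse rsp = client.query(new SolrQuery("*:*"));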

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Apr 17, 2016, at 9:35 PM, John Bickerstaff  
> wrote:
> 
> My prior use of SOLR in production was pre SOLR cloud.  We put a
> round-robin  load balancer in front of replicas for searching.
> 
> Do I understand correctly that a load balancer is unnecessary with SOLR
> Cloud?  I. E. -- SOLR and Zookeeper will balance the load, regardless of
> which replica's URL is getting hit?
> 
> Are there any caveats?
> 
> Thanks,



Verifying - SOLR Cloud replaces load balancer?

2016-04-17 Thread John Bickerstaff
My prior use of SOLR in production was pre SOLR cloud.  We put a
round-robin  load balancer in front of replicas for searching.

Do I understand correctly that a load balancer is unnecessary with SOLR
Cloud?  I. E. -- SOLR and Zookeeper will balance the load, regardless of
which replica's URL is getting hit?

Are there any caveats?

Thanks,


block join rollups

2016-04-17 Thread Yonik Seeley
Hey folks, we're at the point of figuring out the API for block join
child rollups for the JSON Facet API.
We already have simple block join faceting:
http://yonik.com/solr-nested-objects/
So now we need an API to carry over more information from children to
parents (say rolling up average rating of all the reviews to the
corresponding parent book objects).
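
(For reference, the simple case from that post looks roughly like this, with
illustrative field names; it computes a terms facet over the children of the
matched parent docs:)

json.facet={
  categories : {
    type : terms,
    field : cat_s,
    domain : { blockChildren : "type_s:book" }
  }
}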

I've gathered some of my notes/thoughts on the API here:
https://issues.apache.org/jira/browse/SOLR-8998

Feedback welcome, and we can discuss here in this thread rather than
cluttering the JIRA.

-Yonik


Re: Adding a new shard

2016-04-17 Thread Erick Erickson
bq: So in order for me to move the shards to their own instances, I will have
to take downtime and move the newly created shards & replicas to their own
instances.

No, this is not true.

The easiest way to move things around is use the collections API
ADDREPLICA command after splitting.

Let's call this particular shard S1 on machine M1, and the results of
the SPLITSHARD command S1.1 and S1.2. Further, let's say that your goal
is to move _one_ of the subshards, S1.2, from machine M1 to M2.

So the sequence is:

1> issue SPLITSHARD and wait for it to complete. This requires no
downtime and after the split the old shard becomes inactive and the
two new subshards are servicing all requests. I'd probably stop
indexing during this operation just to be on the safe side, although
that's not necessary. So now you have both S1.1 and S1.2 running on M1

2> Use the ADDREPLICA command to add a replica of S1.2 to M2. Again,
no downtime required. Wait until the new replica is "active", at which
point it's fully operational. So now we have S1.1 and S1.2 running on
M1 and a second replica of S1.2 running on M2.

3> Use the DELETEREPLICA command to remove the S1.2 replica on M1. Now
you have S1.1 running on M1 and S1.2 running on M2. No downtime during
any of this.

4> You should be able to delete S1 now from M1 just to tidy up.

5> Repeat for the other shards.
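
The corresponding collections API calls look roughly like this (collection,
node, and replica names are placeholders; note that SPLITSHARD actually names
the subshards shard1_0 and shard1_1 rather than S1.1/S1.2):

1> /admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1
2> /admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1_1&node=m2:8983_solr
3> /admin/collections?action=DELETEREPLICA&collection=mycoll&shard=shard1_1&replica=core_node3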

Best,
Erick


On Sun, Apr 17, 2016 at 3:09 PM, Jay Potharaju  wrote:
> Erick, thanks for the reply. In my current prod setup I anticipate the number
> of documents to grow almost 5 times by the end of the year, and I am therefore
> planning how to scale when required. We have a high query volume and a growing
> dataset; that is why we would like to scale by sharding & replication.
>
> In my dev sandbox, I have 2 replicas & 2 shards created using compositeId
> as my routing option. If I split the shard, it will create 2 new shards on
> each of the solr instances, including replicas, and my requests will start
> going to the new shards.
> So in order for me to move the shards to their own instances, I will have to
> take downtime and move the newly created shards & replicas to their own
> instances. Is that a correct interpretation of how shard splitting
> would work?
>
> I was hoping that solr would automagically split the existing shard &
> create replicas on the new instances rather than on the existing nodes. That
> is why I said the current shard splitting will not work for me.
> Thanks
>
> On Sat, Apr 16, 2016 at 8:08 PM, Erick Erickson 
> wrote:
>
>> Why don't you think splitting the shards will do what you need?
>> Admittedly it will have to be applied to each shard and will
> >> double the number of shards you have; that's the current
>> limitation. At the end, though, you will have 4 shards when
>> you used to have 2 and you can move them around to whatever
>> hardware you can scrape up.
>>
>> This assumes you're using the default compositeId routing
>> scheme and not implicit routing. If you are using compositeId
>> there is no provision to add another shard.
>>
>> As far as SOLR-5025 is concerned, nobody's working on that
>> that I know of.
>>
>> I have to ask though whether you've tuned your existing
>> machines. How many docs are on each? Why do you think
>> you need more shards? Query speed? OOMs? Java heaps
>> getting too big?
>>
>> Best,
>> Erick
>>
>> On Fri, Apr 15, 2016 at 10:50 PM, Jay Potharaju 
>> wrote:
> >> > I found ticket https://issues.apache.org/jira/browse/SOLR-5025 which
> >> > talks about sharding in solrcloud. Are there any plans to address this
> >> > issue in the near future?
>> > Can any of the users on the forum comment how they are handling this
>> > scenario in production?
>> > Thanks
>> >
>> > On Fri, Apr 15, 2016 at 4:28 PM, Jay Potharaju 
>> > wrote:
>> >
>> >> Hi,
> >> >> I have an existing collection which has 2 shards, one on each node in
> >> >> the cloud. Now I want to split the existing collection into 3 shards
> >> >> because of an increase in the volume of data, and create this new
> >> >> shard on a new node in the solrCloud.
> >> >>
> >> >>  I read about splitting a shard & creating a shard, but am not sure it
> >> >> will work.
>> >>
>> >> Any suggestions how are others handling this scenario in production.
>> >> --
>> >> Thanks
>> >> Jay
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Thanks
>> > Jay Potharaju
>>
>
>
>
> --
> Thanks
> Jay Potharaju


Re: Adding a new shard

2016-04-17 Thread Jay Potharaju
Erick, thanks for the reply. In my current prod setup I anticipate the number
of documents to grow almost 5 times by the end of the year, and I am therefore
planning how to scale when required. We have a high query volume and a growing
dataset; that is why we would like to scale by sharding & replication.

In my dev sandbox, I have 2 replicas & 2 shards created using compositeId
as my routing option. If I split the shard, it will create 2 new shards on
each of the solr instances, including replicas, and my requests will start
going to the new shards.
So in order for me to move the shards to their own instances, I will have to
take downtime and move the newly created shards & replicas to their own
instances. Is that a correct interpretation of how shard splitting
would work?

I was hoping that solr would automagically split the existing shard &
create replicas on the new instances rather than on the existing nodes. That
is why I said the current shard splitting will not work for me.
Thanks

On Sat, Apr 16, 2016 at 8:08 PM, Erick Erickson 
wrote:

> Why don't you think splitting the shards will do what you need?
> Admittedly it will have to be applied to each shard and will
> double the number of shards you have; that's the current
> limitation. At the end, though, you will have 4 shards when
> you used to have 2 and you can move them around to whatever
> hardware you can scrape up.
>
> This assumes you're using the default compositeId routing
> scheme and not implicit routing. If you are using compositeId
> there is no provision to add another shard.
>
> As far as SOLR-5025 is concerned, nobody's working on that
> that I know of.
>
> I have to ask though whether you've tuned your existing
> machines. How many docs are on each? Why do you think
> you need more shards? Query speed? OOMs? Java heaps
> getting too big?
>
> Best,
> Erick
>
> On Fri, Apr 15, 2016 at 10:50 PM, Jay Potharaju 
> wrote:
> > I found ticket https://issues.apache.org/jira/browse/SOLR-5025 which talks
> > about sharding in solrcloud. Are there any plans to address this issue in
> > the near future?
> > Can any of the users on the forum comment how they are handling this
> > scenario in production?
> > Thanks
> >
> > On Fri, Apr 15, 2016 at 4:28 PM, Jay Potharaju 
> > wrote:
> >
> >> Hi,
> >> I have an existing collection which has 2 shards, one on each node in
> >> the cloud. Now I want to split the existing collection into 3 shards
> >> because of an increase in the volume of data, and create this new shard
> >> on a new node in the solrCloud.
> >>
> >>  I read about splitting a shard & creating a shard, but am not sure it
> >> will work.
> >>
> >> Any suggestions how are others handling this scenario in production.
> >> --
> >> Thanks
> >> Jay
> >>
> >>
> >
> >
> >
> > --
> > Thanks
> > Jay Potharaju
>



-- 
Thanks
Jay Potharaju


Re: ManagedSynonymFilterFactory per core instead of config set?

2016-04-17 Thread Erick Erickson
The managed schema stuff changes the configset
copy by design, not the individual core's synonyms,
so you are correct: you can't at this point have
different synonyms per core if those cores are using
the same configset.

Best,
Erick

On Sun, Apr 17, 2016 at 7:57 AM, Shawn Heisey  wrote:
> On 4/17/2016 8:00 AM, Georg Sorst wrote:
>> Am I doing something wrong, or is this on purpose? Is there a way to manage
> >> synonyms per core? Should I use a different value for $resource for each
>> core?
>
> If you want different configs for each core, don't use configsets.  Each
> core will need its own conf directory with its own config, which can be
> different from every other config.
>
> One of the advancements in SolrCloud is the idea of shared configs.
> Every core (shard replica) in a collection pulls exactly the same config
> from zookeeper.  That config can be shared between multiple collections
> running on multiple servers, so a change in the shared config will
> affect all linked collections once they are reloaded.
>
> The configsets feature (which was first available in Solr 4.8) is
> intended to bring that same shared configuration to systems NOT running
> in cloud mode, but it can only do so for cores on a single machine,
> unlike SolrCloud.  By using configsets, you have told Solr that you
> *want* the exact same config on multiple cores on that server.
>
> Thanks,
> Shawn
>


Re: SOLR-3666

2016-04-17 Thread Shawn Heisey
On 4/15/2016 4:01 PM, Jay Potharaju wrote:
> I am using solrCloud with DIH for indexing my data. Is it possible to get
> status of all my DIH across all nodes in the cloud? I saw this jira ticket
> from couple of years ago.
> https://issues.apache.org/jira/browse/SOLR-3666

Reiterating something Erick said:  DIH is not cloud-aware.

DIH only knows about the node/core it's running on, and cannot give you
status for the rest of the cloud.  The DIH functionality predates
SolrCloud by a long time.  When SolrCloud became reality, functionality
was added so DIH would WORK with SolrCloud, but DIH is still
fundamentally the same feature it was before that time.
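
(As a client-side workaround you can enumerate the cores from cluster state
and poll each one's /dataimport handler yourself; a rough SolrJ sketch, with
placeholder zkHost and collection names:)

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

try (CloudSolrClient cloud = new CloudSolrClient("zk1:2181,zk2:2181")) {
  cloud.connect();
  for (Slice slice : cloud.getZkStateReader().getClusterState()
                          .getCollection("mycollection").getActiveSlices()) {
    for (Replica replica : slice.getReplicas()) {
      // each core only knows its own DIH status, so ask every core directly
      String coreUrl = replica.getStr("base_url") + "/" + replica.getStr("core");
      System.out.println(coreUrl + "/dataimport?command=status");
      // fetch each URL with any HTTP client and aggregate the responses
    }
  }
}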

DIH should be considered a bridge technology.  It's a contrib module --
the code is included with the rest of Solr, but isn't part of the core
application.

DIH can get you going when your data is in a technology it supports,
like a database ... but a custom multi-threaded application (especially
one written using SolrJ) will probably have better performance, and will
definitely allow you to deal with SolrCloud and your data in any way you
desire.

A fully multi-threaded and cloud-aware DIH, possibly as a separate
application, would be an awesome addition to Solr's ecosystem, but it
would not be a trivial program to write.

Thanks,
Shawn



Re: ManagedSynonymFilterFactory per core instead of config set?

2016-04-17 Thread Shawn Heisey
On 4/17/2016 8:00 AM, Georg Sorst wrote:
> Am I doing something wrong, or is this on purpose? Is there a way to manage
> synonyms per core? Should I use a different value for $resource for each
> core?

If you want different configs for each core, don't use configsets.  Each
core will need its own conf directory with its own config, which can be
different from every other config.

One of the advancements in SolrCloud is the idea of shared configs. 
Every core (shard replica) in a collection pulls exactly the same config
from zookeeper.  That config can be shared between multiple collections
running on multiple servers, so a change in the shared config will
affect all linked collections once they are reloaded.

The configsets feature (which was first available in Solr 4.8) is
intended to bring that same shared configuration to systems NOT running
in cloud mode, but it can only do so for cores on a single machine,
unlike SolrCloud.  By using configsets, you have told Solr that you
*want* the exact same config on multiple cores on that server.

Thanks,
Shawn



ManagedSynonymFilterFactory per core instead of config set?

2016-04-17 Thread Georg Sorst
Hi list!

Is it possible to set synonyms per core when using
the ManagedSynonymFilterFactory, even when using config sets?

What makes me think that this is not possible is that the synonyms are
stored in
$solr_home/configsets/$config_set/conf/_schema_analysis_synonyms_$resource.json.
So when I add some synonyms to core A (which uses $config_set) and then
create core B (which also uses $config_set), then core B will have the same
synonyms as core A. I have validated this by GETing the synonyms for core B.
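
(For context, the field type in question is declared along these lines; the
type and resource names here are made up:)

<fieldType name="text_managed" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- "english" is the $resource; Solr persists the synonyms in
         _schema_analysis_synonyms_english.json next to the schema -->
    <filter class="solr.ManagedSynonymFilterFactory" managed="english"/>
  </analyzer>
</fieldType>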

Am I doing something wrong, or is this on purpose? Is there a way to manage
synonyms per core? Should I use a different value for $resource for each
core?

Thanks!
Georg
-- 
*Georg M. Sorst I CTO*
FINDOLOGIC GmbH



Jakob-Haringer-Str. 5a | 5020 Salzburg I T.: +43 662 456708
E.: g.so...@findologic.com
www.findologic.com


Re: Set Config API user properties with SolrJ

2016-04-17 Thread Georg Sorst
For further reference: I solved this by using an HTTP-based Solr in the
tests. Look at SolrTestCaseJ4.buildJettyConfig for more info.
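
(A rough sketch of that test setup; the MiniSolrCloudCluster constructor
details may differ between versions:)

JettyConfig jettyConfig = SolrTestCaseJ4.buildJettyConfig("/solr");
MiniSolrCloudCluster cluster = new MiniSolrCloudCluster(1, testBaseDir, jettyConfig);
// requests now travel over HTTP, so SolrConfigHandler sees a real POST
SolrClient client = cluster.getSolrClient();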

Georg Sorst wrote on Mon., Apr 11, 2016 at 10:28:

> The issue is here:
>
> org.apache.solr.handler.SolrConfigHandler.handleRequestBody()
>
> This method will check the 'httpMethod' of the request. The
> set-user-property call will only be evaluated if the method is POST.
> Apparently, for non-HTTP requests this will never be true.
>
> I'll gladly write an issue / testcase / patch if someone can give me a
> little help.
>
> Georg Sorst wrote on Sun., Apr 10, 2016 at 14:36:
>
> >> Addendum: Apparently the code works fine with HttpSolrClient, but not
> >> with EmbeddedSolrServer (used in our tests). The most recent version I
> >> tested this with was 5.5.0.
>>
> >> Georg Sorst wrote on Sun., Apr 10, 2016 at 01:49:
>>
>>> Hi,
>>>
>>> how can you set Config API values from SolrJ? Does anyone have an
>>> example for this?
>>>
>>> Here's what I'm currently trying:
>>>
> >>> /* Build the structure for the request */
> >>> Map<String, Object> parameters = new HashMap<String, Object>() {{
> >>>   put("key", "value");
> >>> }};
> >>> final NamedList<Object> requestParameters = new NamedList<>();
> >>> requestParameters.add("set-user-property", parameters);
> >>>
> >>> /* Build the JSON */
> >>> CharArr json = new CharArr();
> >>> new SchemaRequestJSONWriter(json).write(requestParameters);
> >>> ContentStreamBase.StringStream stringStream =
> >>>     new ContentStreamBase.StringStream(json.toString());
> >>> Collection<ContentStream> contentStreams =
> >>>     Collections.<ContentStream>singletonList(stringStream);
> >>>
> >>> /* Send the request */
> >>> GenericSolrRequest request =
> >>>     new GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
> >>> request.setContentStreams(contentStreams);
> >>> SimpleSolrResponse response = request.process(
> >>>     new HttpSolrClient("http://localhost:8983/solr/test"));
>>>
>>> The JSON is looking good, but it's doing... nothing. The response just
>>> contains the default config-overlay contents (znodeVersion). Any idea why?
>>>
>>> Thanks!
>>> Georg
>>> --
>>> *Georg M. Sorst I CTO*
>>> FINDOLOGIC GmbH
>>>
>>>
>>>
>>> Jakob-Haringer-Str. 5a | 5020 Salzburg I T.: +43 662 456708
>>> E.: g.so...@findologic.com
> >>> www.findologic.com
>>>
>> --
>> *Georg M. Sorst I CTO*
>> FINDOLOGIC GmbH
>>
>>
>>
>> Jakob-Haringer-Str. 5a | 5020 Salzburg I T.: +43 662 456708
>> E.: g.so...@findologic.com
> >> www.findologic.com
>>
> --
> *Georg M. Sorst I CTO*
> FINDOLOGIC GmbH
>
>
>
> Jakob-Haringer-Str. 5a | 5020 Salzburg I T.: +43 662 456708
> E.: g.so...@findologic.com
> www.findologic.com
>
-- 
*Georg M. Sorst I CTO*
FINDOLOGIC GmbH



Jakob-Haringer-Str. 5a | 5020 Salzburg I T.: +43 662 456708
E.: g.so...@findologic.com
www.findologic.com


Re: dataimport db-data-config.xml

2016-04-17 Thread Reth RM
What are the errors reported? Errors can be seen either on the admin page's
Logging tab or in the log file under solr_home.
If you follow the steps mentioned in the blog precisely, it should almost
certainly work:
http://solr.pl/en/2010/10/11/data-import-handler-%E2%80%93-how-to-import-data-from-sql-databases-part-1/

If you encounter errors at any step, let us know.
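
To the original question: yes, two queries against the same data source are
just two sibling entity elements under document. A rough sketch (the driver
is the standard Postgres one; queries, entity names, and column names are
placeholders):

<dataConfig>
    <dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
        url="jdbc:postgresql://0.0.0.0:5432/iboats" user="iboats" password="root"/>
    <document>
        <entity name="user1" query="SELECT id FROM users"
                transformer="TemplateTransformer">
            <field column="uid" template="user1-${user1.id}"/>
        </entity>
        <entity name="boat" query="SELECT id, name FROM boats"/>
    </document>
</dataConfig>

Each root-level entity runs its own query during a full-import; just make
sure the two entities produce distinct uniqueKey values.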




On Sat, Apr 16, 2016 at 10:49 AM, kishor  wrote:

> I am trying to run two pgsql queries on the same data source. Is this
> possible in db-data-config.xml?
>
> <dataConfig>
>     <dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
>         url="jdbc:postgresql://0.0.0.0:5432/iboats" user="iboats"
>         password="root" />
>     <document>
>         <entity name="user1" query="..." transformer="TemplateTransformer">
>             <field column="..." template="user1-${user1.id}"/>
>         </entity>
>     </document>
> </dataConfig>
>
> This code is not working. Please suggest a working example.
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/dataimport-db-data-config-xml-tp4270673.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: [ANNOUNCE] YCSB 0.8.0 Release

2016-04-17 Thread Guillaume Rossolini
Hi,

FYI there is a ticket about this:
https://github.com/brianfrankcooper/YCSB/issues/637

Regards,

--
Guillaume ROSSOLINI


On 16 April 2016 at 10:35, Alexandre Rafalovitch  wrote:

> This does not actually say what the product IS, and trying to follow
> the links gets to the "Yahoo will no longer have a research division" page,
> which makes this seem to be the "good bye" release!?!
> Just an observation,
> Alex.
> P.s. Global redirect of all blog articles to
>
> https://yahooresearch.tumblr.com/post/139436148571/yahoos-new-research-model
> 
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 15 April 2016 at 14:45, Chrisjan Matser 
> wrote:
> > On behalf of the development community, I am pleased to announce the
> > release of YCSB 0.8.0.  Though there were no major Solr updates in this
> > release, we are always interested in having members from the community help
> > with ensuring that we have compliance with Solr's latest and greatest.
> >
> > Highlights:
> >
> > * Amazon S3 improvements including proper closing of the S3Object
> >
> > * Apache Cassandra improvements including update to DataStax driver 3.0.0,
> >   tested with Cassandra 2.2.5
> >
> > * Apache HBase10 improvements including synchronization for multi-threading
> >
> > * Core improvements to address future enhancements
> >
> > * Elasticsearch improvements including update to 2.3.1 (latest stable
> > version)
> >
> > * Orientdb improvements including a readallfields fix
> >
> > Full release notes, including links to source and convenience binaries:
> >
> > https://github.com/brianfrankcooper/YCSB/releases/tag/0.8.0
> >
> > This release covers changes from the last month.
>