[twitter-dev] Re: Search queries not working

2009-04-13 Thread Alex Payne

Yes. Queries are limited to 140 characters.
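
For anyone hitting this limit programmatically, here is a minimal Java sketch of checking the query length before building the request URL. The class and method names are made up for illustration, and whether the 140-character limit is measured before or after URL encoding is an assumption here.

    public class QueryLengthCheck {
        // Reject queries longer than the documented 140-character limit before
        // sending them to the Search API. Assumes the limit applies to the raw
        // query text rather than the URL-encoded form.
        static String buildSearchUrl(String query) throws Exception {
            if (query.length() > 140) {
                throw new IllegalArgumentException("Search query exceeds 140 characters");
            }
            return "http://search.twitter.com/search.json?q="
                    + java.net.URLEncoder.encode(query, "UTF-8");
        }

        public static void main(String[] args) throws Exception {
            System.out.println(buildSearchUrl("hello"));
        }
    }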

Basha Shaik wrote:

Hi,

Is there any length limit on the query I pass to the Search API?

Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 10:27 AM, Basha Shaik wrote:


Hi Chad,
There are no duplicates with this.
Thank you.

Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:

Hi Chad,

Thank you. I was trying a query that has only 55 tweets
and I had kept rpp at 100, so I was not getting next_page.
When I decreased rpp to 20 and tried again, I got it. Thank you very
much. I will check whether any duplicates occur with this and let
you know.


Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel wrote:

next_page




--
Alex Payne - API Lead, Twitter, Inc.
http://twitter.com/al3x



[twitter-dev] Re: Search queries not working

2009-04-12 Thread Basha Shaik
Hi,

Is there any length limit on the query I pass to the Search API?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 10:27 AM, Basha Shaik wrote:

> Hi Chad,
> No duplicates are there with this.
> Thank You
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:
>
>> Hi chad,
>>
>> Thank you. I was trying for a query which has only 55 tweets and i have
>> kept 100 as rpp . so i was not getting next_page. when i decreased rpp to 20
>> and tried i got now. thank you very much. i Will check if any Duplicates
>> occur with these and let you know.
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:
>>
>>> next_page
>>>
>>
>>
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Chad,
There are no duplicates with this.
Thank you.
Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:

> Hi chad,
>
> Thank you. I was trying for a query which has only 55 tweets and i have
> kept 100 as rpp . so i was not getting next_page. when i decreased rpp to 20
> and tried i got now. thank you very much. i Will check if any Duplicates
> occur with these and let you know.
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:
>
>> next_page
>>
>
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Chad,

Thank you. I was trying a query that has only 55 tweets and had kept rpp at
100, so I was not getting next_page. When I decreased rpp to 20 and tried
again, I got it. Thank you very much. I will check whether any duplicates
occur with this and let you know.

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:

> next_page
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Chad Etzel

I have not used Java in a long time, but there should be a "next_page"
key in the map you create from the JSON response. Here is an example
JSON response with rpp=1 for "hello":

{
  "results": [
    {
      "text": "hello",
      "to_user_id": null,
      "from_user": "fsas1975",
      "id": 1450457219,
      "from_user_id": 6788389,
      "source": "web<\/a>",
      "profile_image_url": "http:\/\/s3.amazonaws.com\/twitter_production\/profile_images\/117699880\/514HjlKzd1L__AA280__normal.jpg",
      "created_at": "Sat, 04 Apr 2009 06:59:57 +0000"
    }
  ],
  "since_id": 0,
  "max_id": 1450457219,
  "refresh_url": "?since_id=1450457219&q=hello",
  "results_per_page": 1,
  "next_page": "?page=2&max_id=1450457219&rpp=1&q=hello",
  "completed_in": 0.013591,
  "page": 1,
  "query": "hello"
}

The part you are interested in is this:
"next_page":"?page=2&max_id=1450457219&rpp=1&q=hello"

You can construct the next-page URL by appending this value to:
"http://search.twitter.com/search.json"

-Chad
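
For the Java side of this, here is a minimal sketch of pulling "next_page" out of that response and building the next request URL. It assumes the org.json library is available; the class and method names are made up for illustration.

    import org.json.JSONObject;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class NextPageExample {

        // Fetch one page of search results and return the URL of the following
        // page, or null when the response has no "next_page" field (for example,
        // when the query matches fewer tweets than rpp).
        static String fetchNextPageUrl(String pageUrl) throws Exception {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(pageUrl).openStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            in.close();

            JSONObject jdata = new JSONObject(body.toString());
            if (!jdata.has("next_page")) {
                return null;
            }
            // next_page already carries "?page=...&max_id=...&rpp=...&q=...",
            // so it only needs to be appended to the base search.json URL.
            return "http://search.twitter.com/search.json" + jdata.getString("next_page");
        }

        public static void main(String[] args) throws Exception {
            System.out.println(fetchNextPageUrl("http://search.twitter.com/search.json?q=hello&rpp=1"));
        }
    }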


On Sat, Apr 4, 2009 at 2:55 AM, Basha Shaik  wrote:
> Hi i am using java. We parse the json response. and store the value as key -
> value pairs in a Map.
>
> In the reponse no wahere i found next_url or next_page.
> Can you tell me how we can store all json data in a variable.
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>


[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi, I am using Java. We parse the JSON response and store the values as
key-value pairs in a Map.

Nowhere in the response did I find next_url or next_page.
Can you tell me how we can store all of the JSON data in a variable?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:41 AM, Chad Etzel  wrote:

>
> My example was in javascript. How are you retrieving the json data?
> What language are you using?
> -chad
>
> On Sat, Apr 4, 2009 at 2:35 AM, Basha Shaik 
> wrote:
> > Hi Chad,
> > how can we store all json data in a variable "jdata".
> > Can you tell me how to do that?
> > I am using java for jason processing
> >
> > Which technology are you using?
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:
> >>
> >> Sorry, typo previously:
> >>
> >> var next_page_url = "http://search.twitter.com/search.json"; +
> >> jdata.next_page;
> >>
> >> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> >> > Assuming you get the json data somehow and store it in a variable
> >> > called "jdata", you can construct the next page url thus:
> >> >
> >> > var next_page_url = "http://search.twitter.com/"; + jdata.next_page;
> >> >
> >> > -Chad
> >> >
> >> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik <
> basha.neteli...@gmail.com>
> >> > wrote:
> >> >> I am using json
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel 
> wrote:
> >> >>>
> >> >>> Are you using the .atom or .json API feed?  I am only familiar with
> >> >>> the .json feed.
> >> >>> -Chad
> >> >>>
> >> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik
> >> >>> 
> >> >>> wrote:
> >> >>> > Hi Chad,
> >> >>> >
> >> >>> > how can we use "next_page" in the url we request. where can we get
> >> >>> > the
> >> >>> > url
> >> >>> > we need to pass.
> >> >>> >
> >> >>> > Regards,
> >> >>> >
> >> >>> > Mahaboob Basha Shaik
> >> >>> > www.netelixir.com
> >> >>> > Making Search Work
> >> >>> >
> >> >>> >
> >> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never
> seen
> >> >>> >> them anywhere), but at least in the json data there is a
> >> >>> >> "next_page"
> >> >>> >> field which uses "?page=_&max_id=__" already prefilled for
> you.
> >> >>> >> This should definitely avoid the duplicate tweet issue.  I've
> never
> >> >>> >> had to do any client-side duplicate filtering when using the
> >> >>> >> correct
> >> >>> >> combination of "page","max_id", and "rpp" values...
> >> >>> >>
> >> >>> >> If you give very specific examples (the actual URL data would be
> >> >>> >> handy) where you are seeing duplicates between pages, we can
> >> >>> >> probably
> >> >>> >> help sort this out.
> >> >>> >>
> >> >>> >> -Chad
> >> >>> >>
> >> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
> >> >>> >> wrote:
> >> >>> >> >
> >> >>> >> > The use of prev_url and next_url will take care of step 1 from
> >> >>> >> > your
> >> >>> >> > flow described above. Specifically, next_url will give your
> >> >>> >> > application the URI to contact to get the next page of results.
> >> >>> >> >
> >> >>> >> > Combining max_id and next_url usage will not solve the
> duplicate
> >> >>> >> > problem. To overcome that issue, you will have to simply strip
> >> >>> >> > the
> >> >>> >> > duplicate tweets on the client-side.
> >> >>> >> >
> >> >>> >> > Thanks,
> >> >>> >> > Doug Williams
> >> >>> >> > Twitter API Support
> >> >>> >> > http://twitter.com/dougw
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
> >> >>> >> > 
> >> >>> >> > wrote:
> >> >>> >> >> HI,
> >> >>> >> >>
> >> >>> >> >> Can you give me an example how i can use prev_url and next_url
> >> >>> >> >> with
> >> >>> >> >> max_id.
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >> No I am following below process to search
> >> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
> >> >>> >> >> incrementing
> >> >>> >> >> the param 'page'
> >> >>> >> >> 2. Get the id of the last status on page 15 and set that as
> the
> >> >>> >> >> max_id
> >> >>> >> >> for the next query
> >> >>> >> >> 3. If we have more results, go to step 1
> >> >>> >> >>
> >> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
> >> >>> >> >> record
> >> >>> >> >> in
> >> >>> >> >> page
> >> >>> >> >> 2.
> >> >>> >> >>
> >> >>> >> >> I understood the reason why i got the duplicates from matts
> >> >>> >> >> previous
> >> >>> >> >> mail.
> >> >>> >> >>
> >> >>> >> >> Will this problem solve if i use max_id with prev_url and
> >> >>> >> >> next_url?
> >> >>> >> >>  How can the duplicate problem be solved
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >> Regards,
> >> >>> >> >>
> >> >>> >> >> Mahaboob Basha Shaik
> >> >>> >> >> 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

My example was in javascript. How are you retrieving the json data?
What language are you using?
-chad

On Sat, Apr 4, 2009 at 2:35 AM, Basha Shaik  wrote:
> Hi Chad,
> how can we store all json data in a variable "jdata".
> Can you tell me how to do that?
> I am using java for jason processing
>
> Which technology are you using?
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:
>>
>> Sorry, typo previously:
>>
>> var next_page_url = "http://search.twitter.com/search.json"; +
>> jdata.next_page;
>>
>> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
>> > Assuming you get the json data somehow and store it in a variable
>> > called "jdata", you can construct the next page url thus:
>> >
>> > var next_page_url = "http://search.twitter.com/"; + jdata.next_page;
>> >
>> > -Chad
>> >
>> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik 
>> > wrote:
>> >> I am using json
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>> >>>
>> >>> Are you using the .atom or .json API feed?  I am only familiar with
>> >>> the .json feed.
>> >>> -Chad
>> >>>
>> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik
>> >>> 
>> >>> wrote:
>> >>> > Hi Chad,
>> >>> >
>> >>> > how can we use "next_page" in the url we request. where can we get
>> >>> > the
>> >>> > url
>> >>> > we need to pass.
>> >>> >
>> >>> > Regards,
>> >>> >
>> >>> > Mahaboob Basha Shaik
>> >>> > www.netelixir.com
>> >>> > Making Search Work
>> >>> >
>> >>> >
>> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
>> >>> > wrote:
>> >>> >>
>> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> >>> >> them anywhere), but at least in the json data there is a
>> >>> >> "next_page"
>> >>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>> >>> >> This should definitely avoid the duplicate tweet issue.  I've never
>> >>> >> had to do any client-side duplicate filtering when using the
>> >>> >> correct
>> >>> >> combination of "page","max_id", and "rpp" values...
>> >>> >>
>> >>> >> If you give very specific examples (the actual URL data would be
>> >>> >> handy) where you are seeing duplicates between pages, we can
>> >>> >> probably
>> >>> >> help sort this out.
>> >>> >>
>> >>> >> -Chad
>> >>> >>
>> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
>> >>> >> wrote:
>> >>> >> >
>> >>> >> > The use of prev_url and next_url will take care of step 1 from
>> >>> >> > your
>> >>> >> > flow described above. Specifically, next_url will give your
>> >>> >> > application the URI to contact to get the next page of results.
>> >>> >> >
>> >>> >> > Combining max_id and next_url usage will not solve the duplicate
>> >>> >> > problem. To overcome that issue, you will have to simply strip
>> >>> >> > the
>> >>> >> > duplicate tweets on the client-side.
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> > Doug Williams
>> >>> >> > Twitter API Support
>> >>> >> > http://twitter.com/dougw
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>> >>> >> > 
>> >>> >> > wrote:
>> >>> >> >> HI,
>> >>> >> >>
>> >>> >> >> Can you give me an example how i can use prev_url and next_url
>> >>> >> >> with
>> >>> >> >> max_id.
>> >>> >> >>
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> No I am following below process to search
>> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
>> >>> >> >> incrementing
>> >>> >> >> the param 'page'
>> >>> >> >> 2. Get the id of the last status on page 15 and set that as the
>> >>> >> >> max_id
>> >>> >> >> for the next query
>> >>> >> >> 3. If we have more results, go to step 1
>> >>> >> >>
>> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
>> >>> >> >> record
>> >>> >> >> in
>> >>> >> >> page
>> >>> >> >> 2.
>> >>> >> >>
>> >>> >> >> I understood the reason why i got the duplicates from matts
>> >>> >> >> previous
>> >>> >> >> mail.
>> >>> >> >>
>> >>> >> >> Will this problem solve if i use max_id with prev_url and
>> >>> >> >> next_url?
>> >>> >> >>  How can the duplicate problem be solved
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> Regards,
>> >>> >> >>
>> >>> >> >> Mahaboob Basha Shaik
>> >>> >> >> www.netelixir.com
>> >>> >> >> Making Search Work
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>> >>> >> >> wrote:
>> >>> >> >>>
>> >>> >> >>> Basha,
>> >>> >> >>> Pagination is defined well here [1].
>> >>> >> >>>
>> >>> >> >>> The next_url and prev_url fields give your client HTTP URIs to
>> >>> >> >>> move
>> >>> >> >>> forward and backward through the result set. You can use them
>> >>> >> >>> to
>> >>> >> >>> page
>> >>> >> >>> through search results.
>> >>> >> >>>
>> >>> >> >>> I have some work to do on the search docs and I'll add field
>> >>> >> >>> definitions then as well.
>> >>> >> >>>
>> >>> >> >>> 1. http://en.wik

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Chad,
How can we store all of the JSON data in a variable "jdata"?
Can you tell me how to do that?
I am using Java for JSON processing.

Which technology are you using?
Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:

>
> Sorry, typo previously:
>
> var next_page_url = "http://search.twitter.com/search.json"; +
> jdata.next_page;
>
> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> > Assuming you get the json data somehow and store it in a variable
> > called "jdata", you can construct the next page url thus:
> >
> > var next_page_url = "http://search.twitter.com/"; + jdata.next_page;
> >
> > -Chad
> >
> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik 
> wrote:
> >> I am using json
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
> >>>
> >>> Are you using the .atom or .json API feed?  I am only familiar with
> >>> the .json feed.
> >>> -Chad
> >>>
> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik  >
> >>> wrote:
> >>> > Hi Chad,
> >>> >
> >>> > how can we use "next_page" in the url we request. where can we get
> the
> >>> > url
> >>> > we need to pass.
> >>> >
> >>> > Regards,
> >>> >
> >>> > Mahaboob Basha Shaik
> >>> > www.netelixir.com
> >>> > Making Search Work
> >>> >
> >>> >
> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
> wrote:
> >>> >>
> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
> >>> >> them anywhere), but at least in the json data there is a "next_page"
> >>> >> field which uses "?page=_&max_id=__" already prefilled for you.
> >>> >> This should definitely avoid the duplicate tweet issue.  I've never
> >>> >> had to do any client-side duplicate filtering when using the correct
> >>> >> combination of "page","max_id", and "rpp" values...
> >>> >>
> >>> >> If you give very specific examples (the actual URL data would be
> >>> >> handy) where you are seeing duplicates between pages, we can
> probably
> >>> >> help sort this out.
> >>> >>
> >>> >> -Chad
> >>> >>
> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
> wrote:
> >>> >> >
> >>> >> > The use of prev_url and next_url will take care of step 1 from
> your
> >>> >> > flow described above. Specifically, next_url will give your
> >>> >> > application the URI to contact to get the next page of results.
> >>> >> >
> >>> >> > Combining max_id and next_url usage will not solve the duplicate
> >>> >> > problem. To overcome that issue, you will have to simply strip the
> >>> >> > duplicate tweets on the client-side.
> >>> >> >
> >>> >> > Thanks,
> >>> >> > Doug Williams
> >>> >> > Twitter API Support
> >>> >> > http://twitter.com/dougw
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
> >>> >> > 
> >>> >> > wrote:
> >>> >> >> HI,
> >>> >> >>
> >>> >> >> Can you give me an example how i can use prev_url and next_url
> with
> >>> >> >> max_id.
> >>> >> >>
> >>> >> >>
> >>> >> >>
> >>> >> >> No I am following below process to search
> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
> incrementing
> >>> >> >> the param 'page'
> >>> >> >> 2. Get the id of the last status on page 15 and set that as the
> >>> >> >> max_id
> >>> >> >> for the next query
> >>> >> >> 3. If we have more results, go to step 1
> >>> >> >>
> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
> record
> >>> >> >> in
> >>> >> >> page
> >>> >> >> 2.
> >>> >> >>
> >>> >> >> I understood the reason why i got the duplicates from matts
> previous
> >>> >> >> mail.
> >>> >> >>
> >>> >> >> Will this problem solve if i use max_id with prev_url and
> next_url?
> >>> >> >>  How can the duplicate problem be solved
> >>> >> >>
> >>> >> >>
> >>> >> >> Regards,
> >>> >> >>
> >>> >> >> Mahaboob Basha Shaik
> >>> >> >> www.netelixir.com
> >>> >> >> Making Search Work
> >>> >> >>
> >>> >> >>
> >>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
> >>> >> >> wrote:
> >>> >> >>>
> >>> >> >>> Basha,
> >>> >> >>> Pagination is defined well here [1].
> >>> >> >>>
> >>> >> >>> The next_url and prev_url fields give your client HTTP URIs to
> move
> >>> >> >>> forward and backward through the result set. You can use them to
> >>> >> >>> page
> >>> >> >>> through search results.
> >>> >> >>>
> >>> >> >>> I have some work to do on the search docs and I'll add field
> >>> >> >>> definitions then as well.
> >>> >> >>>
> >>> >> >>> 1. 
> >>> >> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >>> >> >>>
> >>> >> >>> Doug Williams
> >>> >> >>> Twitter API Support
> >>> >> >>> http://twitter.com/dougw
> >>> >> >>>
> >>> >> >>>
> >>> >> >>>
> >>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
> >>> >> >>> 
> >>> >> >>> wrote:
> >>> >> >>> > Hi matt,
> >>> >> >>> >
> >>> >> >>> > Thank You
> >>> >> >>> > What is Pagination? Does it mean that I cannot use 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Sorry, typo previously:

var next_page_url = "http://search.twitter.com/search.json" + jdata.next_page;

On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> Assuming you get the json data somehow and store it in a variable
> called "jdata", you can construct the next page url thus:
>
> var next_page_url = "http://search.twitter.com/"; + jdata.next_page;
>
> -Chad
>
> On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik  wrote:
>> I am using json
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>>>
>>> Are you using the .atom or .json API feed?  I am only familiar with
>>> the .json feed.
>>> -Chad
>>>
>>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
>>> wrote:
>>> > Hi Chad,
>>> >
>>> > how can we use "next_page" in the url we request. where can we get the
>>> > url
>>> > we need to pass.
>>> >
>>> > Regards,
>>> >
>>> > Mahaboob Basha Shaik
>>> > www.netelixir.com
>>> > Making Search Work
>>> >
>>> >
>>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>>> >>
>>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>>> >> them anywhere), but at least in the json data there is a "next_page"
>>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>>> >> This should definitely avoid the duplicate tweet issue.  I've never
>>> >> had to do any client-side duplicate filtering when using the correct
>>> >> combination of "page","max_id", and "rpp" values...
>>> >>
>>> >> If you give very specific examples (the actual URL data would be
>>> >> handy) where you are seeing duplicates between pages, we can probably
>>> >> help sort this out.
>>> >>
>>> >> -Chad
>>> >>
>>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>>> >> >
>>> >> > The use of prev_url and next_url will take care of step 1 from your
>>> >> > flow described above. Specifically, next_url will give your
>>> >> > application the URI to contact to get the next page of results.
>>> >> >
>>> >> > Combining max_id and next_url usage will not solve the duplicate
>>> >> > problem. To overcome that issue, you will have to simply strip the
>>> >> > duplicate tweets on the client-side.
>>> >> >
>>> >> > Thanks,
>>> >> > Doug Williams
>>> >> > Twitter API Support
>>> >> > http://twitter.com/dougw
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>>> >> > 
>>> >> > wrote:
>>> >> >> HI,
>>> >> >>
>>> >> >> Can you give me an example how i can use prev_url and next_url with
>>> >> >> max_id.
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> No I am following below process to search
>>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>>> >> >> the param 'page'
>>> >> >> 2. Get the id of the last status on page 15 and set that as the
>>> >> >> max_id
>>> >> >> for the next query
>>> >> >> 3. If we have more results, go to step 1
>>> >> >>
>>> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
>>> >> >> in
>>> >> >> page
>>> >> >> 2.
>>> >> >>
>>> >> >> I understood the reason why i got the duplicates from matts previous
>>> >> >> mail.
>>> >> >>
>>> >> >> Will this problem solve if i use max_id with prev_url and next_url?
>>> >> >>  How can the duplicate problem be solved
>>> >> >>
>>> >> >>
>>> >> >> Regards,
>>> >> >>
>>> >> >> Mahaboob Basha Shaik
>>> >> >> www.netelixir.com
>>> >> >> Making Search Work
>>> >> >>
>>> >> >>
>>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> Basha,
>>> >> >>> Pagination is defined well here [1].
>>> >> >>>
>>> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
>>> >> >>> forward and backward through the result set. You can use them to
>>> >> >>> page
>>> >> >>> through search results.
>>> >> >>>
>>> >> >>> I have some work to do on the search docs and I'll add field
>>> >> >>> definitions then as well.
>>> >> >>>
>>> >> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>> >> >>>
>>> >> >>> Doug Williams
>>> >> >>> Twitter API Support
>>> >> >>> http://twitter.com/dougw
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>>> >> >>> 
>>> >> >>> wrote:
>>> >> >>> > Hi matt,
>>> >> >>> >
>>> >> >>> > Thank You
>>> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
>>> >> >>> > searching
>>> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
>>> >> >>> > find
>>> >> >>> > next_url
>>> >> >>> > and prev_url in documentation. how can these two urls be used
>>> >> >>> > with
>>> >> >>> > max_id.
>>> >> >>> > Please explain with example if possible.
>>> >> >>> >
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > Regards,
>>> >> >>> >
>>> >> >>> > Mahaboob Basha Shaik
>>> >> >>> > www.netelixir.com
>>> >> >>> > Making Search Work
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>>> >> >>> > wrote:
>>> >> >>> >>
>>> >> >>> >> Hi Basha,
>>> >> >>> >>     The max_id 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Doug,
You said we can use next_url and prev_url.

I tried to get next_url, but the response says there is no field
called next_url. Should I pass next_url in the request with max_id? If so,
how can I know what next_url is?

Can you give a clear example of how to use prev_url and next_url?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 6:57 PM, Doug Williams  wrote:

>
> The use of prev_url and next_url will take care of step 1 from your
> flow described above. Specifically, next_url will give your
> application the URI to contact to get the next page of results.
>
> Combining max_id and next_url usage will not solve the duplicate
> problem. To overcome that issue, you will have to simply strip the
> duplicate tweets on the client-side.
>
> Thanks,
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
> wrote:
> > HI,
> >
> > Can you give me an example how i can use prev_url and next_url with
> max_id.
> >
> >
> >
> > No I am following below process to search
> > 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> > the param 'page'
> > 2. Get the id of the last status on page 15 and set that as the max_id
> > for the next query
> > 3. If we have more results, go to step 1
> >
> > here i got duplicate. 100th record in page 1 was same as 1st record in
> page
> > 2.
> >
> > I understood the reason why i got the duplicates from matts previous
> mail.
> >
> > Will this problem solve if i use max_id with prev_url and next_url?
> >  How can the duplicate problem be solved
> >
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
> >>
> >> Basha,
> >> Pagination is defined well here [1].
> >>
> >> The next_url and prev_url fields give your client HTTP URIs to move
> >> forward and backward through the result set. You can use them to page
> >> through search results.
> >>
> >> I have some work to do on the search docs and I'll add field
> >> definitions then as well.
> >>
> >> 1. 
> >> http://en.wikipedia.org/wiki/Pagination_(web)
> >>
> >> Doug Williams
> >> Twitter API Support
> >> http://twitter.com/dougw
> >>
> >>
> >>
> >> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik  >
> >> wrote:
> >> > Hi matt,
> >> >
> >> > Thank You
> >> > What is Pagination? Does it mean that I cannot use max_id for
> searching
> >> > tweets. What does next_url and prev_url fields mean. I did not find
> >> > next_url
> >> > and prev_url in documentation. how can these two urls be used with
> >> > max_id.
> >> > Please explain with example if possible.
> >> >
> >> >
> >> >
> >> > Regards,
> >> >
> >> > Mahaboob Basha Shaik
> >> > www.netelixir.com
> >> > Making Search Work
> >> >
> >> >
> >> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> wrote:
> >> >>
> >> >> Hi Basha,
> >> >> The max_id is only intended to be used for pagination via the
> >> >> next_url
> >> >> and prev_url fields and is known not to work with since_id. It is not
> >> >> documented as a valid parameter because it's known to only work in
> the
> >> >> case
> >> >> it was designed for. We added the max_id to prevent the problem where
> >> >> you
> >> >> click on 'Next' and page two starts with duplicates. Here's the
> >> >> scenario:
> >> >>  1. Let's say you search for 'foo'.
> >> >>  2. You wait 10 seconds, during which 5 people send tweets containing
> >> >> 'foo'.
> >> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >> >>3.a. If we displayed results 21-40 the first 5 results would look
> >> >> like
> >> >> duplicates because they were "pushed down" by the 5 new entries.
> >> >>3.b. If we append a max_id from the time you searched we can do
> and
> >> >> offset from the maximum and the new 5 entries are skipped.
> >> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >> >> duplicates. Since we wanted to provide the same data in the API as
> the
> >> >> UI we
> >> >> added the next_url and prev_url members in our output.
> >> >> Thanks;
> >> >>   — Matt Sanford
> >> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >> >>
> >> >> HI Matt,
> >> >>
> >> >> when Since_id and Max_id are given together, max_id is not working.
> >> >> This
> >> >> query is ignoring max_id. But with only since _id its working fine.
> Is
> >> >> there
> >> >> any problem when max_id and since_id are used together.
> >> >>
> >> >> Also please tell me what does max_id exactly mean and also what does
> it
> >> >> return when we send a request.
> >> >> Also tell me what the total returns.
> >> >>
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford 
> wrote:
> >> >>>
> >> >>> Hi there,
> >> >>>
> >> >>>Can you provide an exam

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Assuming you get the json data somehow and store it in a variable
called "jdata", you can construct the next page url thus:

var next_page_url = "http://search.twitter.com/" + jdata.next_page;

-Chad

On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik  wrote:
> I am using json
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>>
>> Are you using the .atom or .json API feed?  I am only familiar with
>> the .json feed.
>> -Chad
>>
>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
>> wrote:
>> > Hi Chad,
>> >
>> > how can we use "next_page" in the url we request. where can we get the
>> > url
>> > we need to pass.
>> >
>> > Regards,
>> >
>> > Mahaboob Basha Shaik
>> > www.netelixir.com
>> > Making Search Work
>> >
>> >
>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>> >>
>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> >> them anywhere), but at least in the json data there is a "next_page"
>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>> >> This should definitely avoid the duplicate tweet issue.  I've never
>> >> had to do any client-side duplicate filtering when using the correct
>> >> combination of "page","max_id", and "rpp" values...
>> >>
>> >> If you give very specific examples (the actual URL data would be
>> >> handy) where you are seeing duplicates between pages, we can probably
>> >> help sort this out.
>> >>
>> >> -Chad
>> >>
>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>> >> >
>> >> > The use of prev_url and next_url will take care of step 1 from your
>> >> > flow described above. Specifically, next_url will give your
>> >> > application the URI to contact to get the next page of results.
>> >> >
>> >> > Combining max_id and next_url usage will not solve the duplicate
>> >> > problem. To overcome that issue, you will have to simply strip the
>> >> > duplicate tweets on the client-side.
>> >> >
>> >> > Thanks,
>> >> > Doug Williams
>> >> > Twitter API Support
>> >> > http://twitter.com/dougw
>> >> >
>> >> >
>> >> >
>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>> >> > 
>> >> > wrote:
>> >> >> HI,
>> >> >>
>> >> >> Can you give me an example how i can use prev_url and next_url with
>> >> >> max_id.
>> >> >>
>> >> >>
>> >> >>
>> >> >> No I am following below process to search
>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> >> >> the param 'page'
>> >> >> 2. Get the id of the last status on page 15 and set that as the
>> >> >> max_id
>> >> >> for the next query
>> >> >> 3. If we have more results, go to step 1
>> >> >>
>> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
>> >> >> in
>> >> >> page
>> >> >> 2.
>> >> >>
>> >> >> I understood the reason why i got the duplicates from matts previous
>> >> >> mail.
>> >> >>
>> >> >> Will this problem solve if i use max_id with prev_url and next_url?
>> >> >>  How can the duplicate problem be solved
>> >> >>
>> >> >>
>> >> >> Regards,
>> >> >>
>> >> >> Mahaboob Basha Shaik
>> >> >> www.netelixir.com
>> >> >> Making Search Work
>> >> >>
>> >> >>
>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>> >> >> wrote:
>> >> >>>
>> >> >>> Basha,
>> >> >>> Pagination is defined well here [1].
>> >> >>>
>> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
>> >> >>> forward and backward through the result set. You can use them to
>> >> >>> page
>> >> >>> through search results.
>> >> >>>
>> >> >>> I have some work to do on the search docs and I'll add field
>> >> >>> definitions then as well.
>> >> >>>
>> >> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>> >> >>>
>> >> >>> Doug Williams
>> >> >>> Twitter API Support
>> >> >>> http://twitter.com/dougw
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>> >> >>> 
>> >> >>> wrote:
>> >> >>> > Hi matt,
>> >> >>> >
>> >> >>> > Thank You
>> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
>> >> >>> > searching
>> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
>> >> >>> > find
>> >> >>> > next_url
>> >> >>> > and prev_url in documentation. how can these two urls be used
>> >> >>> > with
>> >> >>> > max_id.
>> >> >>> > Please explain with example if possible.
>> >> >>> >
>> >> >>> >
>> >> >>> >
>> >> >>> > Regards,
>> >> >>> >
>> >> >>> > Mahaboob Basha Shaik
>> >> >>> > www.netelixir.com
>> >> >>> > Making Search Work
>> >> >>> >
>> >> >>> >
>> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>> >> >>> > wrote:
>> >> >>> >>
>> >> >>> >> Hi Basha,
>> >> >>> >>     The max_id is only intended to be used for pagination via
>> >> >>> >> the
>> >> >>> >> next_url
>> >> >>> >> and prev_url fields and is known not to work with since_id. It
>> >> >>> >> is
>> >> >>> >> not
>> >> >>> >> documented as a valid parameter because it's known to only work
>> >> >>> >> in
>> >> >>> >> the
>> >> >>> >> cas

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
I am using JSON.

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:

>
> Are you using the .atom or .json API feed?  I am only familiar with
> the .json feed.
> -Chad
>
> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
> wrote:
> > Hi Chad,
> >
> > how can we use "next_page" in the url we request. where can we get the
> url
> > we need to pass.
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
> >>
> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
> >> them anywhere), but at least in the json data there is a "next_page"
> >> field which uses "?page=_&max_id=__" already prefilled for you.
> >> This should definitely avoid the duplicate tweet issue.  I've never
> >> had to do any client-side duplicate filtering when using the correct
> >> combination of "page","max_id", and "rpp" values...
> >>
> >> If you give very specific examples (the actual URL data would be
> >> handy) where you are seeing duplicates between pages, we can probably
> >> help sort this out.
> >>
> >> -Chad
> >>
> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
> >> >
> >> > The use of prev_url and next_url will take care of step 1 from your
> >> > flow described above. Specifically, next_url will give your
> >> > application the URI to contact to get the next page of results.
> >> >
> >> > Combining max_id and next_url usage will not solve the duplicate
> >> > problem. To overcome that issue, you will have to simply strip the
> >> > duplicate tweets on the client-side.
> >> >
> >> > Thanks,
> >> > Doug Williams
> >> > Twitter API Support
> >> > http://twitter.com/dougw
> >> >
> >> >
> >> >
> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik <
> basha.neteli...@gmail.com>
> >> > wrote:
> >> >> HI,
> >> >>
> >> >> Can you give me an example how i can use prev_url and next_url with
> >> >> max_id.
> >> >>
> >> >>
> >> >>
> >> >> No I am following below process to search
> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> >> >> the param 'page'
> >> >> 2. Get the id of the last status on page 15 and set that as the
> max_id
> >> >> for the next query
> >> >> 3. If we have more results, go to step 1
> >> >>
> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
> in
> >> >> page
> >> >> 2.
> >> >>
> >> >> I understood the reason why i got the duplicates from matts previous
> >> >> mail.
> >> >>
> >> >> Will this problem solve if i use max_id with prev_url and next_url?
> >> >>  How can the duplicate problem be solved
> >> >>
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
> wrote:
> >> >>>
> >> >>> Basha,
> >> >>> Pagination is defined well here [1].
> >> >>>
> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
> >> >>> forward and backward through the result set. You can use them to
> page
> >> >>> through search results.
> >> >>>
> >> >>> I have some work to do on the search docs and I'll add field
> >> >>> definitions then as well.
> >> >>>
> >> >>> 1. 
> >> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >> >>>
> >> >>> Doug Williams
> >> >>> Twitter API Support
> >> >>> http://twitter.com/dougw
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
> >> >>> 
> >> >>> wrote:
> >> >>> > Hi matt,
> >> >>> >
> >> >>> > Thank You
> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
> >> >>> > searching
> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
> find
> >> >>> > next_url
> >> >>> > and prev_url in documentation. how can these two urls be used with
> >> >>> > max_id.
> >> >>> > Please explain with example if possible.
> >> >>> >
> >> >>> >
> >> >>> >
> >> >>> > Regards,
> >> >>> >
> >> >>> > Mahaboob Basha Shaik
> >> >>> > www.netelixir.com
> >> >>> > Making Search Work
> >> >>> >
> >> >>> >
> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> Hi Basha,
> >> >>> >> The max_id is only intended to be used for pagination via the
> >> >>> >> next_url
> >> >>> >> and prev_url fields and is known not to work with since_id. It is
> >> >>> >> not
> >> >>> >> documented as a valid parameter because it's known to only work
> in
> >> >>> >> the
> >> >>> >> case
> >> >>> >> it was designed for. We added the max_id to prevent the problem
> >> >>> >> where
> >> >>> >> you
> >> >>> >> click on 'Next' and page two starts with duplicates. Here's the
> >> >>> >> scenario:
> >> >>> >>  1. Let's say you search for 'foo'.
> >> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
> >> >>> >> containing
> >> >>> >> 'foo'.
> >> >>> >>  3. You click next and go to page=

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Are you using the .atom or .json API feed?  I am only familiar with
the .json feed.
-Chad

On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik  wrote:
> Hi Chad,
>
> how can we use "next_page" in the url we request. where can we get the url
> we need to pass.
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>>
>> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> them anywhere), but at least in the json data there is a "next_page"
>> field which uses "?page=_&max_id=__" already prefilled for you.
>> This should definitely avoid the duplicate tweet issue.  I've never
>> had to do any client-side duplicate filtering when using the correct
>> combination of "page","max_id", and "rpp" values...
>>
>> If you give very specific examples (the actual URL data would be
>> handy) where you are seeing duplicates between pages, we can probably
>> help sort this out.
>>
>> -Chad
>>
>> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>> >
>> > The use of prev_url and next_url will take care of step 1 from your
>> > flow described above. Specifically, next_url will give your
>> > application the URI to contact to get the next page of results.
>> >
>> > Combining max_id and next_url usage will not solve the duplicate
>> > problem. To overcome that issue, you will have to simply strip the
>> > duplicate tweets on the client-side.
>> >
>> > Thanks,
>> > Doug Williams
>> > Twitter API Support
>> > http://twitter.com/dougw
>> >
>> >
>> >
>> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
>> > wrote:
>> >> HI,
>> >>
>> >> Can you give me an example how i can use prev_url and next_url with
>> >> max_id.
>> >>
>> >>
>> >>
>> >> No I am following below process to search
>> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> >> the param 'page'
>> >> 2. Get the id of the last status on page 15 and set that as the max_id
>> >> for the next query
>> >> 3. If we have more results, go to step 1
>> >>
>> >> here i got duplicate. 100th record in page 1 was same as 1st record in
>> >> page
>> >> 2.
>> >>
>> >> I understood the reason why i got the duplicates from matts previous
>> >> mail.
>> >>
>> >> Will this problem solve if i use max_id with prev_url and next_url?
>> >>  How can the duplicate problem be solved
>> >>
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>> >>>
>> >>> Basha,
>> >>> Pagination is defined well here [1].
>> >>>
>> >>> The next_url and prev_url fields give your client HTTP URIs to move
>> >>> forward and backward through the result set. You can use them to page
>> >>> through search results.
>> >>>
>> >>> I have some work to do on the search docs and I'll add field
>> >>> definitions then as well.
>> >>>
>> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>> >>>
>> >>> Doug Williams
>> >>> Twitter API Support
>> >>> http://twitter.com/dougw
>> >>>
>> >>>
>> >>>
>> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>> >>> 
>> >>> wrote:
>> >>> > Hi matt,
>> >>> >
>> >>> > Thank You
>> >>> > What is Pagination? Does it mean that I cannot use max_id for
>> >>> > searching
>> >>> > tweets. What does next_url and prev_url fields mean. I did not find
>> >>> > next_url
>> >>> > and prev_url in documentation. how can these two urls be used with
>> >>> > max_id.
>> >>> > Please explain with example if possible.
>> >>> >
>> >>> >
>> >>> >
>> >>> > Regards,
>> >>> >
>> >>> > Mahaboob Basha Shaik
>> >>> > www.netelixir.com
>> >>> > Making Search Work
>> >>> >
>> >>> >
>> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi Basha,
>> >>> >>     The max_id is only intended to be used for pagination via the
>> >>> >> next_url
>> >>> >> and prev_url fields and is known not to work with since_id. It is
>> >>> >> not
>> >>> >> documented as a valid parameter because it's known to only work in
>> >>> >> the
>> >>> >> case
>> >>> >> it was designed for. We added the max_id to prevent the problem
>> >>> >> where
>> >>> >> you
>> >>> >> click on 'Next' and page two starts with duplicates. Here's the
>> >>> >> scenario:
>> >>> >>  1. Let's say you search for 'foo'.
>> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
>> >>> >> containing
>> >>> >> 'foo'.
>> >>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>> >>> >>    3.a. If we displayed results 21-40 the first 5 results would
>> >>> >> look
>> >>> >> like
>> >>> >> duplicates because they were "pushed down" by the 5 new entries.
>> >>> >>    3.b. If we append a max_id from the time you searched we can do
>> >>> >> and
>> >>> >> offset from the maximum and the new 5 entries are skipped.
>> >>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>> >>> >> duplicates. Since we wanted to provide the same data in the API as
>> >>> >> the
>> >>

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Chad,

How can we use "next_page" in the URL we request? Where can we get the URL
we need to pass?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:

>
> I'm not sure of these "next_url" and "prev_url" fields (never seen
> them anywhere), but at least in the json data there is a "next_page"
> field which uses "?page=_&max_id=__" already prefilled for you.
> This should definitely avoid the duplicate tweet issue.  I've never
> had to do any client-side duplicate filtering when using the correct
> combination of "page","max_id", and "rpp" values...
>
> If you give very specific examples (the actual URL data would be
> handy) where you are seeing duplicates between pages, we can probably
> help sort this out.
>
> -Chad
>
> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
> >
> > The use of prev_url and next_url will take care of step 1 from your
> > flow described above. Specifically, next_url will give your
> > application the URI to contact to get the next page of results.
> >
> > Combining max_id and next_url usage will not solve the duplicate
> > problem. To overcome that issue, you will have to simply strip the
> > duplicate tweets on the client-side.
> >
> > Thanks,
> > Doug Williams
> > Twitter API Support
> > http://twitter.com/dougw
> >
> >
> >
> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
> wrote:
> >> HI,
> >>
> >> Can you give me an example how i can use prev_url and next_url with
> max_id.
> >>
> >>
> >>
> >> No I am following below process to search
> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> >> the param 'page'
> >> 2. Get the id of the last status on page 15 and set that as the max_id
> >> for the next query
> >> 3. If we have more results, go to step 1
> >>
> >> here i got duplicate. 100th record in page 1 was same as 1st record in
> page
> >> 2.
> >>
> >> I understood the reason why i got the duplicates from matts previous
> mail.
> >>
> >> Will this problem solve if i use max_id with prev_url and next_url?
> >>  How can the duplicate problem be solved
> >>
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
> >>>
> >>> Basha,
> >>> Pagination is defined well here [1].
> >>>
> >>> The next_url and prev_url fields give your client HTTP URIs to move
> >>> forward and backward through the result set. You can use them to page
> >>> through search results.
> >>>
> >>> I have some work to do on the search docs and I'll add field
> >>> definitions then as well.
> >>>
> >>> 1. 
> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >>>
> >>> Doug Williams
> >>> Twitter API Support
> >>> http://twitter.com/dougw
> >>>
> >>>
> >>>
> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik <
> basha.neteli...@gmail.com>
> >>> wrote:
> >>> > Hi matt,
> >>> >
> >>> > Thank You
> >>> > What is Pagination? Does it mean that I cannot use max_id for
> searching
> >>> > tweets. What does next_url and prev_url fields mean. I did not find
> >>> > next_url
> >>> > and prev_url in documentation. how can these two urls be used with
> >>> > max_id.
> >>> > Please explain with example if possible.
> >>> >
> >>> >
> >>> >
> >>> > Regards,
> >>> >
> >>> > Mahaboob Basha Shaik
> >>> > www.netelixir.com
> >>> > Making Search Work
> >>> >
> >>> >
> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> wrote:
> >>> >>
> >>> >> Hi Basha,
> >>> >> The max_id is only intended to be used for pagination via the
> >>> >> next_url
> >>> >> and prev_url fields and is known not to work with since_id. It is
> not
> >>> >> documented as a valid parameter because it's known to only work in
> the
> >>> >> case
> >>> >> it was designed for. We added the max_id to prevent the problem
> where
> >>> >> you
> >>> >> click on 'Next' and page two starts with duplicates. Here's the
> >>> >> scenario:
> >>> >>  1. Let's say you search for 'foo'.
> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
> containing
> >>> >> 'foo'.
> >>> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >>> >>3.a. If we displayed results 21-40 the first 5 results would look
> >>> >> like
> >>> >> duplicates because they were "pushed down" by the 5 new entries.
> >>> >>3.b. If we append a max_id from the time you searched we can do
> and
> >>> >> offset from the maximum and the new 5 entries are skipped.
> >>> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >>> >> duplicates. Since we wanted to provide the same data in the API as
> the
> >>> >> UI we
> >>> >> added the next_url and prev_url members in our output.
> >>> >> Thanks;
> >>> >>   — Matt Sanford
> >>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >>> >>
> >>> >> HI Matt,
> >>> >>
> >>> >> when Since_id and Max_id are given together, max_id is not working.
> >>> >

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

I'm not sure of these "next_url" and "prev_url" fields (never seen
them anywhere), but at least in the json data there is a "next_page"
field which uses "?page=_&max_id=__" already prefilled for you.
This should definitely avoid the duplicate tweet issue.  I've never
had to do any client-side duplicate filtering when using the correct
combination of "page","max_id", and "rpp" values...

If you give very specific examples (the actual URL data would be
handy) where you are seeing duplicates between pages, we can probably
help sort this out.

-Chad
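
As an illustration of that, here is a minimal Java sketch that simply follows "next_page" from page to page, letting the server fill in page, max_id and rpp. It assumes the org.json library; the class and method names are made up for illustration.

    import org.json.JSONArray;
    import org.json.JSONObject;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class FollowNextPage {

        static String readBody(String url) throws Exception {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream(), "UTF-8"));
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                sb.append(line);
            }
            in.close();
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            String base = "http://search.twitter.com/search.json";
            String url = base + "?q=hello&rpp=20";
            // Keep requesting pages as long as the response advertises a next_page.
            // Because next_page carries the max_id from the first request, tweets
            // arriving while we page are skipped instead of appearing as duplicates.
            while (url != null) {
                JSONObject jdata = new JSONObject(readBody(url));
                JSONArray results = jdata.getJSONArray("results");
                for (int i = 0; i < results.length(); i++) {
                    System.out.println(results.getJSONObject(i).getString("text"));
                }
                url = jdata.has("next_page") ? base + jdata.getString("next_page") : null;
            }
        }
    }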

On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>
> The use of prev_url and next_url will take care of step 1 from your
> flow described above. Specifically, next_url will give your
> application the URI to contact to get the next page of results.
>
> Combining max_id and next_url usage will not solve the duplicate
> problem. To overcome that issue, you will have to simply strip the
> duplicate tweets on the client-side.
>
> Thanks,
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik  
> wrote:
>> HI,
>>
>> Can you give me an example how i can use prev_url and next_url with max_id.
>>
>>
>>
>> No I am following below process to search
>> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> the param 'page'
>> 2. Get the id of the last status on page 15 and set that as the max_id
>> for the next query
>> 3. If we have more results, go to step 1
>>
>> here i got duplicate. 100th record in page 1 was same as 1st record in page
>> 2.
>>
>> I understood the reason why i got the duplicates from matts previous mail.
>>
>> Will this problem solve if i use max_id with prev_url and next_url?
>>  How can the duplicate problem be solved
>>
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>>>
>>> Basha,
>>> Pagination is defined well here [1].
>>>
>>> The next_url and prev_url fields give your client HTTP URIs to move
>>> forward and backward through the result set. You can use them to page
>>> through search results.
>>>
>>> I have some work to do on the search docs and I'll add field
>>> definitions then as well.
>>>
>>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>>
>>> Doug Williams
>>> Twitter API Support
>>> http://twitter.com/dougw
>>>
>>>
>>>
>>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
>>> wrote:
>>> > Hi matt,
>>> >
>>> > Thank You
>>> > What is Pagination? Does it mean that I cannot use max_id for searching
>>> > tweets. What does next_url and prev_url fields mean. I did not find
>>> > next_url
>>> > and prev_url in documentation. how can these two urls be used with
>>> > max_id.
>>> > Please explain with example if possible.
>>> >
>>> >
>>> >
>>> > Regards,
>>> >
>>> > Mahaboob Basha Shaik
>>> > www.netelixir.com
>>> > Making Search Work
>>> >
>>> >
>>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>>> >>
>>> >> Hi Basha,
>>> >> The max_id is only intended to be used for pagination via the
>>> >> next_url
>>> >> and prev_url fields and is known not to work with since_id. It is not
>>> >> documented as a valid parameter because it's known to only work in the
>>> >> case
>>> >> it was designed for. We added the max_id to prevent the problem where
>>> >> you
>>> >> click on 'Next' and page two starts with duplicates. Here's the
>>> >> scenario:
>>> >>  1. Let's say you search for 'foo'.
>>> >>  2. You wait 10 seconds, during which 5 people send tweets containing
>>> >> 'foo'.
>>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>>> >>3.a. If we displayed results 21-40 the first 5 results would look
>>> >> like
>>> >> duplicates because they were "pushed down" by the 5 new entries.
>>> >>3.b. If we append a max_id from the time you searched we can do and
>>> >> offset from the maximum and the new 5 entries are skipped.
>>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>>> >> duplicates. Since we wanted to provide the same data in the API as the
>>> >> UI we
>>> >> added the next_url and prev_url members in our output.
>>> >> Thanks;
>>> >>   — Matt Sanford
>>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>>> >>
>>> >> HI Matt,
>>> >>
>>> >> when Since_id and Max_id are given together, max_id is not working.
>>> >> This
>>> >> query is ignoring max_id. But with only since _id its working fine. Is
>>> >> there
>>> >> any problem when max_id and since_id are used together.
>>> >>
>>> >> Also please tell me what does max_id exactly mean and also what does it
>>> >> return when we send a request.
>>> >> Also tell me what the total returns.
>>> >>
>>> >>
>>> >> Regards,
>>> >>
>>> >> Mahaboob Basha Shaik
>>> >> www.netelixir.com
>>> >> Making Search Work
>>> >>
>>> >>
>>> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>>> >>>
>>> >>> Hi there,
>>> >>>
>>> >>>Can you provide an example URL wher

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Doug Williams

The use of prev_url and next_url will take care of step 1 from your
flow described above. Specifically, next_url will give your
application the URI to contact to get the next page of results.

Combining max_id and next_url usage will not solve the duplicate
problem. To overcome that issue, you will have to simply strip the
duplicate tweets on the client-side.

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
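
If you do strip duplicates client-side, a minimal Java sketch is to remember the tweet ids you have already seen. The class and method names are made up for illustration, and the ids below are invented (one reuses the sample id from the JSON example earlier in the thread).

    import java.util.HashSet;
    import java.util.Set;

    public class DuplicateFilter {

        private final Set<Long> seenIds = new HashSet<Long>();

        // Returns true the first time an id is seen and false for repeats,
        // so callers can drop tweets that show up on more than one page.
        public boolean isNew(long tweetId) {
            return seenIds.add(tweetId);
        }

        public static void main(String[] args) {
            DuplicateFilter filter = new DuplicateFilter();
            long[] endOfPage1 = {1450457219L, 1450457100L};
            long[] startOfPage2 = {1450457100L, 1450456990L}; // overlaps page 1
            for (long id : endOfPage1) {
                System.out.println(id + " new? " + filter.isNew(id));
            }
            for (long id : startOfPage2) {
                System.out.println(id + " new? " + filter.isNew(id));
            }
        }
    }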



On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik  wrote:
> HI,
>
> Can you give me an example how i can use prev_url and next_url with max_id.
>
>
>
> No I am following below process to search
> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> the param 'page'
> 2. Get the id of the last status on page 15 and set that as the max_id
> for the next query
> 3. If we have more results, go to step 1
>
> here i got duplicate. 100th record in page 1 was same as 1st record in page
> 2.
>
> I understood the reason why i got the duplicates from matts previous mail.
>
> Will this problem solve if i use max_id with prev_url and next_url?
>  How can the duplicate problem be solved
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>>
>> Basha,
>> Pagination is defined well here [1].
>>
>> The next_url and prev_url fields give your client HTTP URIs to move
>> forward and backward through the result set. You can use them to page
>> through search results.
>>
>> I have some work to do on the search docs and I'll add field
>> definitions then as well.
>>
>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>
>> Doug Williams
>> Twitter API Support
>> http://twitter.com/dougw
>>
>>
>>
>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
>> wrote:
>> > Hi matt,
>> >
>> > Thank You
>> > What is Pagination? Does it mean that I cannot use max_id for searching
>> > tweets. What does next_url and prev_url fields mean. I did not find
>> > next_url
>> > and prev_url in documentation. how can these two urls be used with
>> > max_id.
>> > Please explain with example if possible.
>> >
>> >
>> >
>> > Regards,
>> >
>> > Mahaboob Basha Shaik
>> > www.netelixir.com
>> > Making Search Work
>> >
>> >
>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>> >>
>> >> Hi Basha,
>> >>     The max_id is only intended to be used for pagination via the
>> >> next_url
>> >> and prev_url fields and is known not to work with since_id. It is not
>> >> documented as a valid parameter because it's known to only work in the
>> >> case
>> >> it was designed for. We added the max_id to prevent the problem where
>> >> you
>> >> click on 'Next' and page two starts with duplicates. Here's the
>> >> scenario:
>> >>  1. Let's say you search for 'foo'.
>> >>  2. You wait 10 seconds, during which 5 people send tweets containing
>> >> 'foo'.
>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>> >>    3.a. If we displayed results 21-40 the first 5 results would look
>> >> like
>> >> duplicates because they were "pushed down" by the 5 new entries.
>> >>    3.b. If we append a max_id from the time you searched we can do an
>> >> offset from the maximum and the 5 new entries are skipped.
>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>> >> duplicates. Since we wanted to provide the same data in the API as the
>> >> UI we
>> >> added the next_url and prev_url members in our output.
>> >> Thanks;
>> >>   — Matt Sanford
>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>> >>
>> >> HI Matt,
>> >>
>> >> when Since_id and Max_id are given together, max_id is not working.
>> >> This
>> >> query is ignoring max_id. But with only since _id its working fine. Is
>> >> there
>> >> any problem when max_id and since_id are used together.
>> >>
>> >> Also please tell me what does max_id exactly mean and also what does it
>> >> return when we send a request.
>> >> Also tell me what the total returns.
>> >>
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>> >>>
>> >>> Hi there,
>> >>>
>> >>>    Can you provide an example URL where since_id isn't working so I
>> >>> can
>> >>> try and reproduce the issue? As for language, the language identifier
>> >>> is not
>> >>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes
>> >>> but it
>> >>> definitely does.
>> >>>
>> >>> Thanks;
>> >>>  — Matt Sanford / @mzsanford
>> >>>
>> >>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>> >>>
>> 
>>  Hi all;
>> 
>>  I see a few people complaining about the since_id not working.  I too
>>  have the same issue - I am currently storing the last executed id and
>>  having to check new tweets to make sure their id is greater than my
>>  last processed id as a temporary workaround.
>> 
>>  I have also noticed that the filter by language param also does

[twitter-dev] Re: Search queries not working

2009-04-02 Thread Basha Shaik
Hi,

Can you give me an example of how I can use prev_url and next_url with max_id?

No. I am following the process below to search:
1. Set rpp=100 and retrieve 15 pages of search results by incrementing
the 'page' param
2. Get the id of the last status on page 15 and set that as the max_id
for the next query
3. If we have more results, go to step 1

Here I got a duplicate: the 100th record on page 1 was the same as the 1st
record on page 2.

I understood the reason why I got the duplicates from Matt's previous mail.

Will this problem be solved if I use max_id with prev_url and next_url?
How can the duplicate problem be solved?


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:

>
> Basha,
> Pagination is defined well here [1].
>
> The next_url and prev_url fields give your client HTTP URIs to move
> forward and backward through the result set. You can use them to page
> through search results.
>
> I have some work to do on the search docs and I'll add field
> definitions then as well.
>
> 1. 
> http://en.wikipedia.org/wiki/Pagination_(web)
>
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
> wrote:
> > Hi matt,
> >
> > Thank You
> > What is Pagination? Does it mean that I cannot use max_id for searching
> > tweets. What does next_url and prev_url fields mean. I did not find
> next_url
> > and prev_url in documentation. how can these two urls be used with
> max_id.
> > Please explain with example if possible.
> >
> >
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
> >>
> >> Hi Basha,
> >> The max_id is only intended to be used for pagination via the
> next_url
> >> and prev_url fields and is known not to work with since_id. It is not
> >> documented as a valid parameter because it's known to only work in the
> case
> >> it was designed for. We added the max_id to prevent the problem where
> you
> >> click on 'Next' and page two starts with duplicates. Here's the
> scenario:
> >>  1. Let's say you search for 'foo'.
> >>  2. You wait 10 seconds, during which 5 people send tweets containing
> >> 'foo'.
> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >>3.a. If we displayed results 21-40 the first 5 results would look
> like
> >> duplicates because they were "pushed down" by the 5 new entries.
> >>3.b. If we append a max_id from the time you searched we can do an
> >> offset from the maximum and the 5 new entries are skipped.
> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >> duplicates. Since we wanted to provide the same data in the API as the
> UI we
> >> added the next_url and prev_url members in our output.
> >> Thanks;
> >>   — Matt Sanford
> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >>
> >> HI Matt,
> >>
> >> when Since_id and Max_id are given together, max_id is not working. This
> >> query is ignoring max_id. But with only since _id its working fine. Is
> there
> >> any problem when max_id and since_id are used together.
> >>
> >> Also please tell me what does max_id exactly mean and also what does it
> >> return when we send a request.
> >> Also tell me what the total returns.
> >>
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
> >>>
> >>> Hi there,
> >>>
> >>>Can you provide an example URL where since_id isn't working so I can
> >>> try and reproduce the issue? As for language, the language identifier
> is not
> >>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes
> but it
> >>> definitely does.
> >>>
> >>> Thanks;
> >>>  — Matt Sanford / @mzsanford
> >>>
> >>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
> >>>
> 
>  Hi all;
> 
>  I see a few people complaining about the since_id not working.  I too
>  have the same issue - I am currently storing the last executed id and
>  having to check new tweets to make sure their id is greater than my
>  last processed id as a temporary workaround.
> 
>  I have also noticed that the filter by language param also doesn't
>  seem to be working 100% - I notice a few chinese tweets, as well as
>  tweets having a null value for language...
> 
> >>>
> >>
> >>
> >
> >
>


[twitter-dev] Re: Search queries not working

2009-04-02 Thread Doug Williams

Basha,
Pagination is defined well here [1].

The next_url and prev_url fields give your client HTTP URIs to move
forward and backward through the result set. You can use them to page
through search results.
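
A tiny sketch of using those fields, purely illustrative: the JSON endpoint and
the exact member names (next_url/prev_url here, possibly next_page and
previous_page in practice) are assumptions rather than documented facts, and it
is not specified whether the value is a full URI or a query-string fragment, so
the sketch handles both.

import json
import urllib.request

SEARCH = "http://search.twitter.com/search.json"  # endpoint assumed from this thread

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def follow(link):
    # Handle either a full HTTP URI or a "?page=2&max_id=...&q=..." fragment.
    return fetch(link if link.startswith("http") else SEARCH + link)

page1 = fetch(SEARCH + "?q=twitter&rpp=20")

nxt = page1.get("next_url") or page1.get("next_page")   # forward link, name assumed
page2 = follow(nxt) if nxt else None

prev = None
if page2:
    prev = page2.get("prev_url") or page2.get("previous_page")  # backward link, name assumed
page1_again = follow(prev) if prev else None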

I have some work to do on the search docs and I'll add field
definitions then as well.

1. http://en.wikipedia.org/wiki/Pagination_(web)

Doug Williams
Twitter API Support
http://twitter.com/dougw



On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik  wrote:
> Hi matt,
>
> Thank You
> What is Pagination? Does it mean that I cannot use max_id for searching
> tweets. What does next_url and prev_url fields mean. I did not find next_url
> and prev_url in documentation. how can these two urls be used with max_id.
> Please explain with example if possible.
>
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>>
>> Hi Basha,
>>     The max_id is only intended to be used for pagination via the next_url
>> and prev_url fields and is known not to work with since_id. It is not
>> documented as a valid parameter because it's known to only work in the case
>> it was designed for. We added the max_id to prevent the problem where you
>> click on 'Next' and page two starts with duplicates. Here's the scenario:
>>  1. Let's say you search for 'foo'.
>>  2. You wait 10 seconds, during which 5 people send tweets containing
>> 'foo'.
>>  3. You click next and go to page=2 (or call page=2 via the API)
>>    3.a. If we displayed results 21-40 the first 5 results would look like
>> duplicates because they were "pushed down" by the 5 new entries.
>>    3.b. If we append a max_id from the time you searched we can do an
>> offset from the maximum and the 5 new entries are skipped.
>>   We use option 3.b. (as does twitter.com now) so you don't see
>> duplicates. Since we wanted to provide the same data in the API as the UI we
>> added the next_url and prev_url members in our output.
>> Thanks;
>>   — Matt Sanford
>> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>>
>> HI Matt,
>>
>> when Since_id and Max_id are given together, max_id is not working. This
>> query is ignoring max_id. But with only since _id its working fine. Is there
>> any problem when max_id and since_id are used together.
>>
>> Also please tell me what does max_id exactly mean and also what does it
>> return when we send a request.
>> Also tell me what the total returns.
>>
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>>>
>>> Hi there,
>>>
>>>    Can you provide an example URL where since_id isn't working so I can
>>> try and reproduce the issue? As for language, the language identifier is not
>>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
>>> definitely does.
>>>
>>> Thanks;
>>>  — Matt Sanford / @mzsanford
>>>
>>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>>>

 Hi all;

 I see a few people complaining about the since_id not working.  I too
 have the same issue - I am currently storing the last executed id and
 having to check new tweets to make sure their id is greater than my
 last processed id as a temporary workaround.

 I have also noticed that the filter by language param also doesn't
 seem to be working 100% - I notice a few chinese tweets, as well as
 tweets having a null value for language...

>>>
>>
>>
>
>


[twitter-dev] Re: Search queries not working

2009-04-02 Thread Basha Shaik
Hi Matt,

Thank you.
What is pagination? Does it mean that I cannot use max_id for searching
tweets? What do the next_url and prev_url fields mean? I did not find next_url
and prev_url in the documentation. How can these two URLs be used with max_id?
Please explain with an example if possible.



Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:

> Hi Basha,
> The max_id is only intended to be used for pagination via the next_url
> and prev_url fields and is known not to work with since_id. It is not
> documented as a valid parameter because it's known to only work in the case
> it was designed for. We added the max_id to prevent the problem where you
> click on 'Next' and page two starts with duplicates. Here's the scenario:
>
>  1. Let's say you search for 'foo'.
>  2. You wait 10 seconds, during which 5 people send tweets containing
> 'foo'.
>  3. You click next and go to page=2 (or call page=2 via the API)
>3.a. If we displayed results 21-40 the first 5 results would look like
> duplicates because they were "pushed down" by the 5 new entries.
>3.b. If we append a max_id from the time you searched we can do an
> offset from the maximum and the 5 new entries are skipped.
>
>   We use option 3.b. (as does twitter.com now) so you don't see
> duplicates. Since we wanted to provide the same data in the API as the UI we
> added the next_url and prev_url members in our output.
>
> Thanks;
>   — Matt Sanford
>
> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>
> HI Matt,
>
> when Since_id and Max_id are given together, max_id is not working. This
> query is ignoring max_id. But with only since _id its working fine. Is there
> any problem when max_id and since_id are used together.
>
> Also please tell me what does max_id exactly mean and also what does it
> return when we send a request.
> Also tell me what the total returns.
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>
>>
>> Hi there,
>>
>>Can you provide an example URL where since_id isn't working so I can
>> try and reproduce the issue? As for language, the language identifier is not
>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
>> definitely does.
>>
>> Thanks;
>>  — Matt Sanford / @mzsanford
>>
>>
>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>>
>>
>>> Hi all;
>>>
>>> I see a few people complaining about the since_id not working.  I too
>>> have the same issue - I am currently storing the last executed id and
>>> having to check new tweets to make sure their id is greater than my
>>> last processed id as a temporary workaround.
>>>
>>> I have also noticed that the filter by language param also doesn't
>>> seem to be working 100% - I notice a few chinese tweets, as well as
>>> tweets having a null value for language...
>>>
>>>
>>
>
>


[twitter-dev] Re: Search queries not working

2009-04-02 Thread feedbackmine

Hi Matt,

I have tried to use the language parameter of Twitter search and find the
results very unreliable. For example:
http://search.twitter.com/search?lang=all&q=tweetjobsearch returns 10
results (all in English), but
http://search.twitter.com/search?lang=en&q=tweetjobsearch only returns 3.

I googled this list and it seems you are using an n-gram based algorithm
(http://groups.google.com/group/twitter-development-talk/msg/565313d7b36e8d65).
I have found the n-gram algorithm works very well for language detection,
but the quality of the training data may make a big difference.

Recently I developed a language detector (in Ruby) myself:
http://github.com/feedbackmine/language_detector/tree/master
It uses Wikipedia's data for training, and based on my limited experience
it works well. Using Wikipedia's data is not my idea; all credit should go to
Kevin Burton (http://feedblog.org/2005/08/19/ngram-language-categorization-source/).
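
As a rough, self-contained illustration of the character n-gram idea, here is a
toy Python sketch (not the Ruby detector linked above; the two training
sentences are stand-ins for real corpus data such as Wikipedia text):

from collections import Counter

def ngrams(text, n=3):
    text = " " + text.lower() + " "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def profile(text, n=3, top=300):
    # The ranked list of most frequent n-grams is the language "profile".
    counts = Counter(ngrams(text, n))
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top))}

def out_of_place(doc_profile, lang_profile, max_rank=300):
    # Sum of rank differences, with a fixed penalty for n-grams the language never saw.
    return sum(abs(rank - lang_profile.get(g, max_rank)) for g, rank in doc_profile.items())

# Toy training data; the quality of the training text is what makes or breaks this approach.
training = {
    "en": "the quick brown fox jumps over the lazy dog and runs away",
    "es": "el rapido zorro marron salta sobre el perro perezoso y se escapa",
}
profiles = {lang: profile(text) for lang, text in training.items()}

def detect(text):
    doc = profile(text)
    return min(profiles, key=lambda lang: out_of_place(doc, profiles[lang]))

print(detect("the dog runs over the fox"))   # -> 'en' with this toy data

With only a handful of training sentences the scores are noisy, which is
exactly the point above about training data quality.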

Just thought you may be interested.

@feedbackmine
http://twitter.com/feedbackmine

On Mar 31, 11:22 am, Matt Sanford  wrote:
> Hi there,
>
>      Can you provide an example URL where since_id isn't working so I  
> can try and reproduce the issue? As for language, the language
> identifier is not a 100% and sometimes makes mistakes. Hopefully not  
> too many mistakes but it definitely does.
>
> Thanks;
>    — Matt Sanford / @mzsanford
>
> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>
>
>
>
>
> > Hi all;
>
> > I see a few people complaining about the since_id not working.  I too
> > have the same issue - I am currently storing the last executed id and
> > having to check new tweets to make sure their id is greater than my
> > last processed id as a temporary workaround.
>
> > I have also noticed that the filter by language param also doesn't
> > seem to be working 100% - I notice a few Chinese tweets, as well as
> > tweets having a null value for language...


[twitter-dev] Re: Search queries not working

2009-04-01 Thread Matt Sanford

Hi Basha,

The max_id is only intended to be used for pagination via the
next_url and prev_url fields and is known not to work with since_id.
It is not documented as a valid parameter because it's known to only
work in the case it was designed for. We added max_id to prevent
the problem where you click on 'Next' and page two starts with
duplicates. Here's the scenario:

 1. Let's say you search for 'foo'.
 2. You wait 10 seconds, during which 5 people send tweets containing 'foo'.
 3. You click next and go to page=2 (or call page=2 via the API)
   3.a. If we displayed results 21-40, the first 5 results would look
like duplicates because they were "pushed down" by the 5 new entries.
   3.b. If we append a max_id from the time you searched, we can do an
offset from that maximum and the 5 new entries are skipped.

  We use option 3.b. (as does twitter.com now) so you don't see
duplicates. Since we wanted to provide the same data in the API as the
UI, we added the next_url and prev_url members in our output.
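
A small simulation of that difference, with made-up ids and no API calls; it
only illustrates the indexing in 3.a versus 3.b:

# Ids are assumed to increase over time; results are listed newest first.
rpp = 20
snapshot = list(range(1000, 900, -1))        # ids 1000..901 existed at search time
page1_at_search_time = snapshot[:rpp]        # ids 1000..981, what the user saw first

new_tweets = [1005, 1004, 1003, 1002, 1001]  # 5 tweets arriving before 'Next' is clicked
live_index = new_tweets + snapshot

# 3.a. Naive paging over the live index: page 2 re-shows the tail of page 1.
naive_page2 = live_index[rpp:2 * rpp]

# 3.b. Anchored paging: keep only ids <= the max_id captured at search time, then offset.
max_id = 1000
anchored = [t for t in live_index if t <= max_id]
anchored_page2 = anchored[rpp:2 * rpp]

print(naive_page2[:5])     # [985, 984, 983, 982, 981] -- already shown on page 1
print(anchored_page2[:5])  # [980, 979, 978, 977, 976] -- no overlap with page 1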


Thanks;
  — Matt Sanford

On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:


HI Matt,

when Since_id and Max_id are given together, max_id is not working.  
This query is ignoring max_id. But with only since _id its working  
fine. Is there any problem when max_id and since_id are used together.


Also please tell me what does max_id exactly mean and also what does  
it return when we send a request.

Also tell me what the total returns.


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford   
wrote:


Hi there,

   Can you provide an example URL where since_id isn't working so I  
can try and reproduce the issue? As for language, the language  
identifier is not a 100% and sometimes makes mistakes. Hopefully not  
too many mistakes but it definitely does.


Thanks;
 — Matt Sanford / @mzsanford


On Mar 31, 2009, at 08:14 AM, codepuke wrote:


Hi all;

I see a few people complaining about the since_id not working.  I too
have the same issue - I am currently storing the last executed id and
having to check new tweets to make sure their id is greater than my
last processed id as a temporary workaround.

I have also noticed that the filter by language param also doesn't
seem to be working 100% - I notice a few chinese tweets, as well as
tweets having a null value for language...







[twitter-dev] Re: Search queries not working

2009-03-31 Thread Basha Shaik
Hi Matt,

When since_id and max_id are given together, max_id is not working; the
query ignores max_id. But with only since_id it works fine. Is there
any problem when max_id and since_id are used together?

Also, please tell me what max_id exactly means and what it
returns when we send a request.
Also tell me what 'total' returns.


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:

>
> Hi there,
>
>Can you provide an example URL where since_id isn't working so I can try
> and reproduce the issue? As for language, the language identifier is not a
> 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
> definitely does.
>
> Thanks;
>  — Matt Sanford / @mzsanford
>
>
> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>
>
>> Hi all;
>>
>> I see a few people complaining about the since_id not working.  I too
>> have the same issue - I am currently storing the last executed id and
>> having to check new tweets to make sure their id is greater than my
>> last processed id as a temporary workaround.
>>
>> I have also noticed that the filter by language param also doesn't
>> seem to be working 100% - I notice a few chinese tweets, as well as
>> tweets having a null value for language...
>>
>>
>


[twitter-dev] Re: Search queries not working

2009-03-31 Thread Matt Sanford


Hi there,

Can you provide an example URL where since_id isn't working so I
can try and reproduce the issue? As for language, the language
identifier is not 100% accurate and sometimes makes mistakes. Hopefully not
too many mistakes, but it definitely does make some.


Thanks;
  — Matt Sanford / @mzsanford

On Mar 31, 2009, at 08:14 AM, codepuke wrote:



Hi all;

I see a few people complaining about the since_id not working.  I too
have the same issue - I am currently storing the last executed id and
having to check new tweets to make sure their id is greater than my
last processed id as a temporary workaround.
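
A minimal sketch of that high-water-mark workaround, assuming each result
carries an integer id and that persisting the last id between runs is handled
elsewhere:

def new_tweets_only(results, last_processed_id):
    # Keep only tweets newer than the last id handled; return the updated mark too.
    fresh = [t for t in results if t["id"] > last_processed_id]
    if fresh:
        last_processed_id = max(t["id"] for t in fresh)
    return fresh, last_processed_id

# ids 1003 and 1002 are new; 1001 and 1000 were already processed.
results = [{"id": 1003}, {"id": 1002}, {"id": 1001}, {"id": 1000}]
fresh, last_id = new_tweets_only(results, last_processed_id=1001)
print([t["id"] for t in fresh], last_id)   # [1003, 1002] 1003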

I have also noticed that the filter by language param also doesn't
seem to be working 100% - I notice a few chinese tweets, as well as
tweets having a null value for language...