My example was in JavaScript. How are you retrieving the JSON data?
What language are you using?
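
In case it helps, here is roughly how I pull the data down in the browser
with plain JavaScript (only a sketch: it assumes JSON.parse is available,
skips error handling, and ignores the cross-domain issue you would hit from
a web page, where you'd want JSONP or a server-side proxy instead):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://search.twitter.com/search.json?q=foo&rpp=100", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var jdata = JSON.parse(xhr.responseText);  // the whole result object
      // next_page comes prefilled (page, max_id, ...), ready to append
      var next_page_url = "http://search.twitter.com/search.json" + jdata.next_page;
    }
  };
  xhr.send();
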
-chad

On Sat, Apr 4, 2009 at 2:35 AM, Basha Shaik <basha.neteli...@gmail.com> wrote:
> Hi Chad,
> How can we store all the JSON data in a variable "jdata"?
> Can you tell me how to do that?
> I am using Java for JSON processing.
>
> Which technology are you using?
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel <jazzyc...@gmail.com> wrote:
>>
>> Sorry, typo previously:
>>
>> var next_page_url = "http://search.twitter.com/search.json" +
>> jdata.next_page;
>>
>> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel <jazzyc...@gmail.com> wrote:
>> > Assuming you get the JSON data somehow and store it in a variable
>> > called "jdata", you can construct the next page URL thus:
>> >
>> > var next_page_url = "http://search.twitter.com/" + jdata.next_page;
>> >
>> > -Chad
>> >
>> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik <basha.neteli...@gmail.com>
>> > wrote:
>> >> I am using the .json feed.
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel <jazzyc...@gmail.com> wrote:
>> >>>
>> >>> Are you using the .atom or .json API feed?  I am only familiar with
>> >>> the .json feed.
>> >>> -Chad
>> >>>
>> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik
>> >>> <basha.neteli...@gmail.com>
>> >>> wrote:
>> >>> > Hi Chad,
>> >>> >
>> >>> > How can we use "next_page" in the URL we request? Where can we get
>> >>> > the URL we need to pass?
>> >>> >
>> >>> > Regards,
>> >>> >
>> >>> > Mahaboob Basha Shaik
>> >>> > www.netelixir.com
>> >>> > Making Search Work
>> >>> >
>> >>> >
>> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel <jazzyc...@gmail.com>
>> >>> > wrote:
>> >>> >>
>> >>> >> I'm not sure about these "next_url" and "prev_url" fields (I've
>> >>> >> never seen them anywhere), but at least in the JSON data there is a
>> >>> >> "next_page" field which comes with "?page=_&max_id=______" already
>> >>> >> prefilled for you. This should definitely avoid the duplicate tweet
>> >>> >> issue. I've never had to do any client-side duplicate filtering when
>> >>> >> using the correct combination of "page", "max_id", and "rpp" values...
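>> >>> >>
>> >>> >> The paging loop is roughly this (just an untested sketch; fetchJson()
>> >>> >> here is only a placeholder for however you actually retrieve and
>> >>> >> parse the response, and q=foo / rpp=100 are example values):
>> >>> >>
>> >>> >>   var base = "http://search.twitter.com/search.json";
>> >>> >>   var jdata = fetchJson(base + "?q=foo&rpp=100");   // page 1
>> >>> >>   // ... use jdata.results ...
>> >>> >>   while (jdata.next_page) {
>> >>> >>     // next_page comes back prefilled (page, max_id, ...), so later
>> >>> >>     // pages line up with the snapshot taken by the first request
>> >>> >>     jdata = fetchJson(base + jdata.next_page);
>> >>> >>     // ... use jdata.results ...
>> >>> >>   }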
>> >>> >>
>> >>> >> If you give very specific examples (the actual URL data would be
>> >>> >> handy) where you are seeing duplicates between pages, we can
>> >>> >> probably
>> >>> >> help sort this out.
>> >>> >>
>> >>> >> -Chad
>> >>> >>
>> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams <d...@twitter.com>
>> >>> >> wrote:
>> >>> >> >
>> >>> >> > The use of prev_url and next_url will take care of step 1 from
>> >>> >> > your
>> >>> >> > flow described above. Specifically, next_url will give your
>> >>> >> > application the URI to contact to get the next page of results.
>> >>> >> >
>> >>> >> > Combining max_id and next_url usage will not solve the duplicate
>> >>> >> > problem. To overcome that issue, you will have to simply strip
>> >>> >> > the
>> >>> >> > duplicate tweets on the client-side.
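>> >>> >> >
>> >>> >> > A minimal client-side filter might look something like this
>> >>> >> > (JavaScript sketch only; "seen" is just a plain object you keep
>> >>> >> > around between requests):
>> >>> >> >
>> >>> >> >   var seen = {};
>> >>> >> >   function withoutDuplicates(results) {
>> >>> >> >     return results.filter(function (tweet) {
>> >>> >> >       if (seen[tweet.id]) return false;   // already handled this one
>> >>> >> >       seen[tweet.id] = true;
>> >>> >> >       return true;
>> >>> >> >     });
>> >>> >> >   }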
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> > Doug Williams
>> >>> >> > Twitter API Support
>> >>> >> > http://twitter.com/dougw
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>> >>> >> > <basha.neteli...@gmail.com>
>> >>> >> > wrote:
>> >>> >> >> Hi,
>> >>> >> >>
>> >>> >> >> Can you give me an example of how I can use prev_url and next_url
>> >>> >> >> with max_id?
>> >>> >> >>
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> Now I am following the below process to search:
>> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
>> >>> >> >> incrementing
>> >>> >> >> the param 'page'
>> >>> >> >> 2. Get the id of the last status on page 15 and set that as the
>> >>> >> >> max_id
>> >>> >> >> for the next query
>> >>> >> >> 3. If we have more results, go to step 1
>> >>> >> >>
>> >>> >> >> Here I got a duplicate: the 100th record on page 1 was the same as
>> >>> >> >> the 1st record on page 2.
>> >>> >> >>
>> >>> >> >> I understood the reason why I got the duplicates from Matt's
>> >>> >> >> previous mail.
>> >>> >> >>
>> >>> >> >> Will this problem be solved if I use max_id with prev_url and
>> >>> >> >> next_url? How can the duplicate problem be solved?
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> Regards,
>> >>> >> >>
>> >>> >> >> Mahaboob Basha Shaik
>> >>> >> >> www.netelixir.com
>> >>> >> >> Making Search Work
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams <d...@twitter.com>
>> >>> >> >> wrote:
>> >>> >> >>>
>> >>> >> >>> Basha,
>> >>> >> >>> Pagination is defined well here [1].
>> >>> >> >>>
>> >>> >> >>> The next_url and prev_url fields give your client HTTP URIs to
>> >>> >> >>> move
>> >>> >> >>> forward and backward through the result set. You can use them
>> >>> >> >>> to
>> >>> >> >>> page
>> >>> >> >>> through search results.
>> >>> >> >>>
>> >>> >> >>> I have some work to do on the search docs and I'll add field
>> >>> >> >>> definitions then as well.
>> >>> >> >>>
>> >>> >> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>> >>> >> >>>
>> >>> >> >>> Doug Williams
>> >>> >> >>> Twitter API Support
>> >>> >> >>> http://twitter.com/dougw
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>> >>> >> >>> <basha.neteli...@gmail.com>
>> >>> >> >>> wrote:
>> >>> >> >>> > Hi Matt,
>> >>> >> >>> >
>> >>> >> >>> > Thank you.
>> >>> >> >>> > What is pagination? Does it mean that I cannot use max_id for
>> >>> >> >>> > searching tweets? What do the next_url and prev_url fields mean? I
>> >>> >> >>> > did not find next_url and prev_url in the documentation. How can
>> >>> >> >>> > these two URLs be used with max_id? Please explain with an example
>> >>> >> >>> > if possible.
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>> > Regards,
>> >>> >> >>> >
>> >>> >> >>> > Mahaboob Basha Shaik
>> >>> >> >>> > www.netelixir.com
>> >>> >> >>> > Making Search Work
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford
>> >>> >> >>> > <m...@twitter.com>
>> >>> >> >>> > wrote:
>> >>> >> >>> >>
>> >>> >> >>> >> Hi Basha,
>> >>> >> >>> >>     The max_id is only intended to be used for pagination via the
>> >>> >> >>> >> next_url and prev_url fields and is known not to work with
>> >>> >> >>> >> since_id. It is not documented as a valid parameter because it's
>> >>> >> >>> >> known to only work in the case it was designed for. We added the
>> >>> >> >>> >> max_id to prevent the problem where you click on 'Next' and page
>> >>> >> >>> >> two starts with duplicates. Here's the scenario:
>> >>> >> >>> >>  1. Let's say you search for 'foo'.
>> >>> >> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
>> >>> >> >>> >> containing 'foo'.
>> >>> >> >>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>> >>> >> >>> >>    3.a. If we displayed results 21-40, the first 5 results would
>> >>> >> >>> >> look like duplicates because they were "pushed down" by the 5 new
>> >>> >> >>> >> entries.
>> >>> >> >>> >>    3.b. If we append a max_id from the time you searched, we can
>> >>> >> >>> >> do an offset from that maximum and the 5 new entries are skipped.
>> >>> >> >>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>> >>> >> >>> >> duplicates. Since we wanted to provide the same data in the API as
>> >>> >> >>> >> the UI, we added the next_url and prev_url members in our output.
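>> >>> >> >>> >>
>> >>> >> >>> >>   To make that concrete (the ids here are made up, just for
>> >>> >> >>> >> illustration): if the newest result when you first searched had
>> >>> >> >>> >> id 1400, page two is then fetched with something like
>> >>> >> >>> >>
>> >>> >> >>> >>   http://search.twitter.com/search.json?q=foo&rpp=20&page=2&max_id=1400
>> >>> >> >>> >>
>> >>> >> >>> >> so the 5 newer tweets (ids above 1400) can never push older
>> >>> >> >>> >> results down onto page two.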
>> >>> >> >>> >> Thanks;
>> >>> >> >>> >>   — Matt Sanford
>> >>> >> >>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>> >>> >> >>> >>
>> >>> >> >>> >> Hi Matt,
>> >>> >> >>> >>
>> >>> >> >>> >> When since_id and max_id are given together, max_id is not
>> >>> >> >>> >> working; the query is ignoring max_id. But with only since_id it's
>> >>> >> >>> >> working fine. Is there any problem when max_id and since_id are
>> >>> >> >>> >> used together?
>> >>> >> >>> >>
>> >>> >> >>> >> Also, please tell me what max_id exactly means and what it returns
>> >>> >> >>> >> when we send a request. Also tell me what 'total' returns.
>> >>> >> >>> >>
>> >>> >> >>> >>
>> >>> >> >>> >> Regards,
>> >>> >> >>> >>
>> >>> >> >>> >> Mahaboob Basha Shaik
>> >>> >> >>> >> www.netelixir.com
>> >>> >> >>> >> Making Search Work
>> >>> >> >>> >>
>> >>> >> >>> >>
>> >>> >> >>> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford
>> >>> >> >>> >> <m...@twitter.com>
>> >>> >> >>> >> wrote:
>> >>> >> >>> >>>
>> >>> >> >>> >>> Hi there,
>> >>> >> >>> >>>
>> >>> >> >>> >>>    Can you provide an example URL where since_id isn't working
>> >>> >> >>> >>> so I can try and reproduce the issue? As for language, the
>> >>> >> >>> >>> language identifier is not 100% accurate and sometimes makes
>> >>> >> >>> >>> mistakes. Hopefully not too many, but it definitely makes some.
>> >>> >> >>> >>>
>> >>> >> >>> >>> Thanks;
>> >>> >> >>> >>>  — Matt Sanford / @mzsanford
>> >>> >> >>> >>>
>> >>> >> >>> >>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>> >>> >> >>> >>>
>> >>> >> >>> >>>>
>> >>> >> >>> >>>> Hi all;
>> >>> >> >>> >>>>
>> >>> >>>> I see a few people complaining about since_id not working. I too
>> >>> >>>> have the same issue - I am currently storing the last processed id
>> >>> >>>> and having to check new tweets to make sure their id is greater
>> >>> >>>> than that id, as a temporary workaround.
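>> >>> >>>>
>> >>> >>>> Roughly like this (just a sketch of the workaround, not my exact
>> >>> >>>> code; it assumes the results arrive newest-first and that lastId is
>> >>> >>>> kept between polls):
>> >>> >>>>
>> >>> >>>>   var lastId = 0;
>> >>> >>>>   function takeNewTweets(results) {
>> >>> >>>>     var fresh = results.filter(function (t) { return t.id > lastId; });
>> >>> >>>>     if (fresh.length > 0) {
>> >>> >>>>       lastId = fresh[0].id;   // newest tweet in this batch
>> >>> >>>>     }
>> >>> >>>>     return fresh;
>> >>> >>>>   }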
>> >>> >> >>> >>>>
>> >>> >>>> I have also noticed that the filter-by-language param doesn't seem
>> >>> >>>> to be working 100% - I notice a few Chinese tweets, as well as
>> >>> >>>> tweets having a null value for language...
>> >>> >> >>> >>>>
>> >>> >> >>> >>>
>> >>> >> >>> >>
>> >>> >> >>> >>
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>
>> >>> >> >>
>> >>> >> >
>> >>> >
>> >>> >
>> >>
>> >>
>> >
>
>
