[twitter-dev] Re: Search: Resolution of Since, and how to avoid pulling redundant search results

2009-05-22 Thread Jeffrey Greenberg
I've got a working solution for pulling in tweets, doing pretty much as I
said, except that it will fail when there is a burst of tweets.  For some
very active search term, say something that exceeds the 1500-tweet search
limit (15 pages x 100 tweets/pg) per day... tweets will be missed.  For my
application, the odds are that missing a small quantity of tweets isn't
earth-shattering, but there's a *chance* it could be.  I think of this as a
twitter shortcoming...  Wondering if it's worth filing a low-priority bug
for it?
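
A minimal sketch of the polling approach discussed in this thread, assuming
the 2009-era search.twitter.com JSON endpoint: poll with since_id, and fall
back to the day-resolution since: operator when the saved id can no longer be
honored. The function and variable names are illustrative only, not from any
official client.

# Sketch only: incremental search polling with since_id, falling back to the
# since: date operator (day resolution) if the saved id is too old.
import json, urllib.parse, urllib.request

SEARCH_URL = "http://search.twitter.com/search.json"   # historical endpoint

def fetch_new(query, last_id=None, last_date=None, rpp=100, max_pages=15):
    """Return tweets newer than last_id (or newer than last_date as a fallback)."""
    results = []
    for page in range(1, max_pages + 1):
        params = {"q": query, "rpp": rpp, "page": page}
        if last_id:
            params["since_id"] = last_id                      # exact cut-off
        elif last_date:
            params["q"] = "%s since:%s" % (query, last_date)  # day resolution only
        url = SEARCH_URL + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            batch = json.load(resp).get("results", [])
        results.extend(batch)
        if len(batch) < rpp:       # short page => no more results to pull
            break
    return results

Note the 15 x 100 ceiling in the loop is exactly the 1500-result limit that
causes the missed-tweet problem described above during bursts.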



On Fri, May 22, 2009 at 1:24 PM, Doug Williams d...@twitter.com wrote:

 As the docs [1] state, the correct format is since:YYYY-MM-DD, which gives
 you resolution down to a day.  Any further processing must be done on the
 client side. Given the constraints, utilizing a combination of since: and
 since_id sounds like a great solution.
 1. http://search.twitter.com/operators

 Thanks,
 Doug
 --

 Doug Williams
 Twitter Platform Support
 http://twitter.com/dougw





 On Fri, May 22, 2009 at 8:05 AM, Jeffrey Greenberg 
 jeffreygreenb...@gmail.com wrote:

 What is the resolution of the 'since' operator?  It appears to be by the
 day, but I'd sure like it to be by the minute or second.
 Can't seem to find this in the docs.

 The use case is that I want to minimize pulling search results that I've
 already got.   My solution is to record the time of the last search and the
 last status_id, and ask for subsequent searches from the status_id. If that
 fails because it's out of range, I'll ask by the last search date.  Is this
 the way to go?


 http://www.tweettronics.com
 http://www.jeffrey-greenberg.com





[twitter-dev] Re: Search not returning all updates

2009-05-02 Thread Abraham Williams
The user might be flagged as spam. Those accounts don't show up as results
in search.

On Sat, May 2, 2009 at 15:37, Andy andykmj...@gmail.com wrote:


 I was missing some results in my API search, so I tried it on the
 twitter web site (http://search.twitter.com) and I'm having the same
 problems. I cannot find tweets from certain users, but can find them from
 others. For example, an update about the Hamptons and another about
 Rome at http://twitter.com/drenert
 I tried various searches:

 Wondering what the Hamptons
 Hamptons
 #Rome
 #localyte
 localyte

 No luck with any. I can see his profile, so I know it's not a private
 account. Can anyone help me or tell me what I'm missing?

 Thanks!
 Andy




-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Milwaukee, WI, United States


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-27 Thread king

There is an API for search; can't we use it to check whether a user exists
or not? I think Twitter provides one; I would like to use it on my
website as a widget where I can search for a user based on email ID or
username and then proceed to their twitter page.

Thank you

On Apr 25, 7:10 pm, Cameron Kaiser spec...@floodgap.com wrote:
  Is it possible to know if a user (profile) exists based on email id .

 No.

 --
  personal: http://www.cameronkaiser.com/ --
   Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
  -- DON'T PANIC! ---


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-27 Thread Cameron Kaiser

   Is it possible to know if a user (profile) exists based on email id .
 
  No.

 There is an API for search,  cant we use it to search if users exists
 or not. I think twitter gives you, I would like it to use on my
 website as a widget where I can search for a user based on emailid or
 username and then proceed to his twitter page.

The Search API does not allow searching for users by E-mail. As far as
username, you can simply query the username and see if it exists.

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- The idea is to die young as late as possible. -- Ashley Montagu 


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-25 Thread Cameron Kaiser

 Is it possible to know if a user (profile) exists based on email id .

No.

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- DON'T PANIC! ---


[twitter-dev] Re: search API issue : source: doesn't work in some case

2009-04-23 Thread Matt Sanford

Hi Yusuke,

Unfortunately the source: operator as it is currently  
implemented has a few shortcomings. One is that it requires a query,  
and the second is that it can only search the last 7 days. This is a  
known performance issue and we're still looking for a way we can  
remove the restriction. I'll talk to Doug about updating the docs.


Thanks;
  – Matt Sanford / @mzsanford
  Twitter API Developer



On Apr 23, 2009, at 08:55 AM, Yusuke wrote:



Hi,

Today I noticed that my Twitter4J automated testcase for the search
API started to fail.

query: thisisarondomstringforatestcase returns 1 tweet.
http://search.twitter.com/search?q=thisisarondomstringforatestcase

But the query: source:web thisisarondomstringforatestcase returns 0
tweets, even though the above tweet was posted via the web.
http://search.twitter.com/search?q=source%3Aweb+thisisarondomstringforatestcase
It used to return a single tweet.

Is there any problem with the search API?

Best regards,
Yusuke




[twitter-dev] Re: Search friends timeline

2009-04-21 Thread mikejablonski

That was my plan for now. It just makes it harder to get the next X
friend status messages that have XYZ in them. I'm surprised this
isn't a more requested feature. Thanks!

On Apr 21, 8:28 am, Chad Etzel jazzyc...@gmail.com wrote:
 You can't.

 Just get the friends timeline and filter it client-side.  You'll have
 more granular control over the filtering that way anyway.

 -Chad

 On Tue, Apr 21, 2009 at 11:16 AM, mikejablonski mjablon...@gmail.com wrote:

  I've looked at the docs and searched the group, but I can't find any
  way to search your friends timeline. How can I get a filtered set of
  friend status messages based on a query? Is this possible? I know I
  could use the search API and throw away all my non-friends, but that
  won't work well for a lot of reasons. Thanks!


[twitter-dev] Re: Search friends timeline

2009-04-21 Thread Doug Williams
Integrating search into your friends_timeline is something we want to do in
the future. With the separation of the Search and REST APIs, it isn't a
trivial feature. For now, you have to parse out results from timelines
client side.

Doug Williams
Twitter API Support
http://twitter.com/dougw
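
A rough illustration of the client-side filtering Chad and Doug suggest,
assuming the 2009-era statuses/friends_timeline.json REST method with basic
auth; the helper name and keyword matching are illustrative only.

# Sketch only: pull the friends timeline and "search" it client-side.
import json, urllib.request

def filtered_friends_timeline(username, password, keyword, count=200):
    url = "http://twitter.com/statuses/friends_timeline.json?count=%d" % count
    # Basic auth, which the 2009-era REST API accepted for protected calls.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as resp:
        statuses = json.load(resp)
    # The filtering happens here, on the client, not on Twitter's side.
    return [s for s in statuses if keyword.lower() in s["text"].lower()]

As Chad notes, doing the match yourself also means you can make the filter as
granular as you like (regexes, multiple keywords, and so on).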


On Tue, Apr 21, 2009 at 9:44 AM, mikejablonski mjablon...@gmail.com wrote:


 That was my plan for now. It just makes it harder to get the next X
 friend status messages that have XYZ in them. I'm surprised this
 isn't a more requested feature. Thanks!

 On Apr 21, 8:28 am, Chad Etzel jazzyc...@gmail.com wrote:
  You can't.
 
  Just get the friends timeline and filter it client-side.  You'll have
  more granular control over the filtering that way anyway.
 
  -Chad
 
  On Tue, Apr 21, 2009 at 11:16 AM, mikejablonski mjablon...@gmail.com
 wrote:
 
   I've looked at the docs and searched the group, but I can't find any
   way to search your friends timeline. How can I get a filtered set of
   friend status messages based on a query? Is this possible? I know I
   could use the search API and throw away all my non-friends, but that
   won't work well for a lot of reasons. Thanks!



[twitter-dev] Re: Search API returns HTTP 406 Not Acceptable

2009-04-21 Thread Ho John Lee

Never mind. I figured out the problem: I had switched queries to
.xml instead of .json, and XML isn't one of the supported formats for the
search API.

On Apr 21, 12:43 pm, hjl hojohn@gmail.com wrote:
 I'm doing some testing this morning with the search API, which was
 working for a while but now is returning HTTP 406 Not Acceptable. Is
 this a symptom of the search API rate limiting? I ran a few queries
 with curl by hand, then ran a loop to see how far back the results
 pages go.

 The search API docs say rate-limited requests should see 503 Service
 Unavailable; I was wondering if that has changed.

 I'll try it again in an hour or so and see if the search API starts
 responding again. But would still like to know if the response code
 for search rate limiting has changed.


[twitter-dev] Re: Search API Rate Limited even with OAuth

2009-04-20 Thread Doug Williams
Please see our article on rate limiting [1]. You will learn why the Search
API does not have a notion of authentication and how its rate limiting
differs from the REST API.

1. http://apiwiki.twitter.com/Rate-limiting

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
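
Not an official recipe, but a sketch of how a client might cope with the
IP-based limiting the article describes: on a 503 from the Search API, back
off before retrying, using the Retry-After header when one is present. The
endpoint is the 2009-era search.twitter.com JSON one; the retry counts and
delays are arbitrary.

# Sketch only: retry a Search API call politely when it is rate limited (503).
import json, time, urllib.error, urllib.parse, urllib.request

def search_with_backoff(query, retries=5):
    url = ("http://search.twitter.com/search.json?" +
           urllib.parse.urlencode({"q": query}))
    delay = 30  # fallback wait in seconds if no Retry-After header is sent
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 503:
                raise                      # some other failure; don't mask it
            wait = int(err.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay *= 2                     # widen the gap on repeated 503s
    raise RuntimeError("still rate limited after %d attempts" % retries)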


On Mon, Apr 20, 2009 at 3:14 PM, Ammo Collector binhqtra...@gmail.comwrote:


 Hello,

 We're getting 503 rate limit responses from the Search API even when
 passing in OAuth tokens.  The same tokens used on friends/followers/
 statuses go through fine, so we know the tokens are good.  It appears
 we're getting IP-limited even with OAuth...

 Klout.net



[twitter-dev] Re: Search by in_reply_to_status_id

2009-04-18 Thread Abraham Williams
http://code.google.com/p/twitter-api/issues/detail?id=142

On Sat, Apr 18, 2009 at 12:32, lordofthelake h1dd3n...@yahoo.it wrote:


 Hello.
 I started a project whose goal is to allow users to track the reaction
 of the crowd to their posts. This includes showing all the replies and
 retweets born as reaction to the original message, organizing the data
 in a threaded schema. While finding retweets of a particular message
 is fairly easy using the Search API (Query: RT @user some words of
 the message), finding and filtering all the replies can become a non-
 trivial work quite fast.

 While tracking the replies given directly to you isn't particularly
 hard, though not very efficient (find posts directed to you via search
 API -- to:user since_id:tweet id -- and then filter by
 in_reply_to_status_id), it becomes a nightmare when you want to track
 what your followers' friends have answered to the replies you got from
 your own followers.

 Example of conversation:
 Me: any idea about how to track the whole conversation originated from
 this tweet?
 MyFollower: @Me try posting in the twitter dev talk, maybe they can
 help you
 AFollowerOf_MyFollower: @MyFollower I know for sure those guys are
 very supportive

 Tracking MyFollower's response is not a big deal, even if the "first
 fetch them all, then select those you need" approach may not be the most
 efficient to implement for large volumes of tweets -- think of the
 power-users with thousands, if not millions, of followers -- since
 above certain limits, API usage caps (especially the number of
 tweets that can be retrieved at once) start becoming an issue.

 The real problem comes when you want to show in the threaded
 conversation AFollowerOf_MyFollower's tweet, too. Sure thing, you can
 use the same strategy as above (Search to:MyFollower, fetch all,
 filter by in_reply_to_status_id), but now instead of having to do a
 single query (to:Me) to retrieve the replies to your posts, you have
 to perform a fetching and filtering cycle for every person who took
 part in the conversation: the growth is exponential.

 A solution may be to allow searches by in_reply_to_status_id
 (something like reply:status id)... this would greatly lower the
 cost of looking for replies to your posts. Would it be possible to
 have such a feature exposed in the future? Are there other, more
 efficient solutions anybody can suggest for my problem?

 Thank you for the support. I apologize for the long post and my bad
 English, but I'm not a native English speaker and I tried to explain my
 problem as clearly as I could.
 -- Michele
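
A rough sketch of the workaround Michele describes (search for to:user, then
filter by in_reply_to_status_id). It assumes the in_reply_to_status_id has to
be looked up per candidate through the REST statuses/show method, since the
thread treats that filtering as extra work; endpoints are the 2009-era ones,
everything else is illustrative.

# Sketch only: approximate "replies to a given status" by searching to:user
# and checking each hit's in_reply_to_status_id via statuses/show.
import json, urllib.parse, urllib.request

def replies_to_status(username, status_id):
    q = urllib.parse.urlencode({"q": "to:%s" % username, "rpp": 100,
                                "since_id": status_id})
    with urllib.request.urlopen("http://search.twitter.com/search.json?" + q) as r:
        candidates = json.load(r).get("results", [])
    replies = []
    for c in candidates:
        # One REST lookup per candidate -- exactly the cost being complained
        # about above, which a reply: search operator would remove.
        show_url = "http://twitter.com/statuses/show/%s.json" % c["id"]
        with urllib.request.urlopen(show_url) as r:
            status = json.load(r)
        if status.get("in_reply_to_status_id") == status_id:
            replies.append(status)
    return replies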




-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search by in_reply_to_status_id

2009-04-18 Thread lordofthelake

Thanks for the link.

On Apr 18, 7:40 pm, Abraham Williams 4bra...@gmail.com wrote:
 http://code.google.com/p/twitter-api/issues/detail?id=142



 On Sat, Apr 18, 2009 at 12:32, lordofthelake h1dd3n...@yahoo.it wrote:

  Hello.
  I started a project whose goal is to allow users to track the reaction
  of the crowd to their posts. This includes showing all the replies and
  retweets born as reaction to the original message, organizing the data
  in a threaded schema. While finding retweets of a particular message
  is fairly easy using the Search API (Query: RT @user some words of
  the message), finding and filtering all the replies can become a non-
  trivial work quite fast.

  While tracking the replies given directly to you isn't particularly
  hard, though not very efficient (find posts directed to you via search
  API -- to:user since_id:tweet id -- and then filter by
  in_reply_to_status_id), it becomes a nightmare when you want to track
  what your followers' friends have answered to the replies you got from
  your own followers.

  Example of conversation:
  Me: any idea about how to track the whole conversation originated from
  this tweet?
  MyFollower: @Me try posting in the twitter dev talk, maybe they can
  help you
  AFollowerOf_MyFollower: @MyFollower I know for sure those guys are
  very supportive

  Tracking MyFollower's response is not a big deal, even if the first
  fetch them all, then select those you need may not be the most
  efficient to implement for large volumes of tweets -- think to the
  power-users with thousands, if not millions, of followers -- since
  above certain limits, API usage caps (especially about number of
  tweets that can be retrieved at once) start becoming an issue.

  The real problem comes when you want to show in the threaded
  conversation AFollowerOf_MyFollower's tweet, too. Sure thing, you can
  use the same strategy as above (Search to:MyFollower, fetch all,
  filter by in_reply_to_status_id), but now instead of having to do a
  single query (to:Me) to retrieve the replies to your posts, you have
  to perform a fetching and filtering cycle for every person who took
  part to the conversation: the growth is exponential.

  A solution may be to allow searches by in_reply_to_status_id
  (something like reply:status id)... this would greatly lower the
  cost of looking for replies to your posts. Would it be possible to
  have such a feature exposed in future? Are there other, more efficient
  solutions, anybody can suggest to solve my problem efficiently?

  Thank you for the support. I apologize for the long post and my bad
  English, but I'm not a native English speaker and I tried to expose my
  problem as clearly as I could.
  -- Michele

 --
 Abraham Williams |http://the.hackerconundrum.com
 Hacker |http://abrah.am|http://twitter.com/abraham
 Web608 | Community Evangelist |http://web608.org
 This email is: [ ] blogable [x] ask first [ ] private.
 Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search API throwing 404's

2009-04-17 Thread dean....@googlemail.com

Hi,

I've experienced a few 404's on search.json this morning.

Sometimes it works, sometimes it doesn't; I can't seem to pinpoint any
particular pattern to when it happens.

--
Leu

On Apr 17, 5:11 am, Chad Etzel jazzyc...@gmail.com wrote:
 Just a quick update:

 The problem has popped up again. Doug is aware of this problem, and he
 says the servers are all stretched pretty thin (understandable).  Just
 curious if anyone else is seeing this as well?

 -Chad

 On Thu, Apr 16, 2009 at 11:30 PM, Chad Etzel jazzyc...@gmail.com wrote:
  Ok, dunno what was happening... I gave my server a swift kick with my
  steel-toed boot and all seems well again... weird.
  -Chad

  On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams d...@twitter.com wrote:
  I just sent 200 queries through without seeing the 404. Are you still 
  seeing
  this?

  Doug Williams
  Twitter API Support
 http://twitter.com/dougw

  On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel jazzyc...@gmail.com wrote:

  Search is throwing 404's for search.json about every 7 or 8 requests...

  <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
  <html><head>
  <title>404 Not Found</title>
  </head><body>
  <h1>Not Found</h1>
  <p>The requested URL /search.json was not found on this server.</p>
  </body></html>

  Also got a Forbidden return when trying to connect to
 http://search.twitter.com/ about 10 minutes ago.

  -Chad


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread stevenic

Thanks for the reply Matt...

Just as an FYI...

I updated my code to track duplicates and then did a sample run over a
5-minute period that, once a minute, paged in new results for the query
"http filter:links".  This resulted in about 11 pages of results each
minute, and over the 11 pages I saw anywhere from 60 - 150 duplicates,
so it's not just 3 or 4.  My concern isn't really the extra
updates; it's the fact that sometimes updates are missing.

Anyway... It sounds like you guys are working on it and I just thought
I'd share that data point with you.

-steve
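
A minimal sketch of the duplicate tracking described above, assuming the JSON
flavor of the same query; the seen-ID set is the whole trick, and the helper
name is illustrative only.

# Sketch only: page through a query and count duplicates by keeping a set of
# every status id already seen.
import json, time, urllib.parse, urllib.request

def page_and_count_dupes(query, pages=11, rpp=100):
    seen, dupes = set(), 0
    for page in range(1, pages + 1):
        q = urllib.parse.urlencode({"q": query, "rpp": rpp, "page": page})
        with urllib.request.urlopen("http://search.twitter.com/search.json?" + q) as r:
            for status in json.load(r).get("results", []):
                if status["id"] in seen:
                    dupes += 1
                else:
                    seen.add(status["id"])
    return dupes

# e.g. run once a minute for five minutes, as in the sample run above:
# for _ in range(5):
#     print(page_and_count_dupes("http filter:links"))
#     time.sleep(60)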


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread Chad Etzel

The query "http filter:links" (which is a bit redundant) is such a
high-volume query that I doubt the search servers would
ever be able to keep in sync, even when things are running up to
speed.

Try a less-trafficked query like "twitter".

-Chad

On Thu, Apr 16, 2009 at 6:55 PM, stevenic ick...@gmail.com wrote:

 Thanks for the reply Matt...

 Just as an FYI...

 I updated my code to track duplicates and then did a sample run over a
 5 minute period that once a minute paged in new results for the query
 http filter:links  This resulted in about 11 pages of results each
 minute and over the 11 pages I saw anywhere from 60 - 150 duplicates
 so it's not just 3 or 4.  My concern isn't really around the extra
 updates it's the fact that sometimes updates are missing.

 Anyway... It sounds like you guys are working on it and I just thought
 I'd share that data point with you.

 -steve



[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread stevenic

So my project is a sort of tweetmeme or twitturly type thing where I'm
looking to collect a sample of the links being shared through
Twitter.  Unlike those projects I don't have a firehose, so I have to
rely on search.  Fortunately, I don't really need to see every link for
my project, just a representative sample.

The actual query I'm using is "http OR www filter:links", where the
filter:links constraint helps make sure I exclude tweets like "can't
get http GET to work".  I don't really care about those.

Agreed that this is a high-volume query, so maybe it'll never
be in sync, but that's ok... Now I'm just ignoring the dupes.  And to
be clear, I have no intention of trying to keep up and use search as a
poor man's firehose.  Whatever rate you guys are comfortable with me
hitting you at is what I'll do.  If that's one request/minute, so be
it.  I just wanted to get the pagination working so that I could better
control things, and that's when I noticed the dupes.

-steve
(Microsoft Research)


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread Chad Etzel

I can't speak for Twitter on the "permission to do that" side, but
that technique will work just fine, so you should be good to go
technically.
-chad

On Thu, Apr 16, 2009 at 9:34 PM, stevenic ick...@gmail.com wrote:

 Matt...  Another thought I just had...

 As Chad points out, with my particular query being high volume it's
 realistic to think that I'm always going to risk seeing duplicates if
 I try to query for results in real time due to replication lag between
 your servers.  But I see how you're using max_id in the paging stuff, and
 I don't really need real time results so it seems like I should be
 able to use an ID that's 30 - 60 minutes old and do all of my queries
 using max_id instead of since_id.  In theory this would have me
 trailing the edge of new results coming into the index by 30 - 60
 minutes but it would give the servers more time to replicate so it
 seems like there'd be less of a chance I'd encounter dupes or missing
 entries.

 If that approach would work (and you would know) I'd just want to make
 sure you'd be ok with me using max_id instead of since_id given that
 max_id isn't documented

 -steve

 On Apr 16, 7:58 am, Matt Sanford m...@twitter.com wrote:
 Hi all,

     There was a problem yesterday with several of the search back-ends
 falling behind. This meant that if your page=1 and page=2 queries hit
 different hosts they could return results that don't line up. If your
 page=2 query hit a host with more lag you would miss results, and if
 it hit a host that was more up-to-date you would see duplicates. We're
 working on fixing this issue and trying to find a way to prevent
 incorrect pagination in the future. Sorry for the delay in replying
 but I was focusing all of my attention on fixing the issue and had to
 let email wait.

 Thanks;
    — Matt Sanford / @mzsanford

 On Apr 15, 2009, at 09:29 PM, stevenic wrote:





  Ok... So I think I know what's going on.  Well I don't know what's
  causing the bug obviously but I think I've narrowed down where it
  is...

  I just issued the Page 1 or previous query for the above example and
  the ID's don't match the ID's from the original query.  There are
  extra rows that come back... 3 to be exact.  So the pagination queries
  are working fine.  It's the initial query that's busted.  It looks
  like that when you do a pagenation query you get back all rows
  matching the filter but a query without max_id sometimes drops rows.
  Well in my case it seems to drop rows everytime... This should get
  fixed...

  *
  for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http

  <feed xmlns:google="http://base.google.com/ns/1.0" xml:lang="en-US"
   xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"
   xmlns="http://www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/">
    <link type="application/atom+xml" rel="self"
     href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=1&amp;q=http" />
    <twitter:warning>adjusted since_id, it was older than allowed</twitter:warning>
    <updated>2009-04-16T03:25:30Z</updated>
    <openSearch:itemsPerPage>15</openSearch:itemsPerPage>
    <openSearch:language>en</openSearch:language>
    <link type="application/atom+xml" rel="next"
     href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=2&amp;q=http" />

    ...Removed...

  <entry>
    <id>tag:search.twitter.com,2005:1530963910</id>
    <published>2009-04-16T03:25:30Z</published>
  </entry>
  <entry>
    <id>tag:search.twitter.com,2005:1530963908</id>
    <published>2009-04-16T03:25:32Z</published>

    ...Where Did This Come From?...

  </entry>
  <entry>
    <id>tag:search.twitter.com,2005:1530963898</id>
    <published>2009-04-16T03:25:30Z</published>

    ...And This?...

  </entry>
    <id>tag:search.twitter.com,2005:1530963896</id>
    <id>tag:search.twitter.com,2005:1530963895</id>
    <id>tag:search.twitter.com,2005:1530963894</id>
  <entry>
    <id>tag:search.twitter.com,2005:1530963892</id>
    <published>2009-04-16T03:25:32Z</published>

    ...And This?...

  </entry>
    <id>tag:search.twitter.com,2005:1530963881</id>
    <id>tag:search.twitter.com,2005:1530963865</id>
    <id>tag:search.twitter.com,2005:1530963860</id>
    <id>tag:search.twitter.com,2005:1530963834</id>
    <id>tag:search.twitter.com,2005:1530963833</id>
    <id>tag:search.twitter.com,2005:1530963829</id>
    <id>tag:search.twitter.com,2005:1530963827</id>
    <id>tag:search.twitter.com,2005:1530963812</id>
  </feed>
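
A minimal sketch of the trailing-max_id idea floated earlier in this thread:
pin max_id to an id recorded 30-60 minutes earlier so every page is cut
against the same snapshot of the index, and stop once the ids fall back to
ones already processed. max_id is the undocumented parameter discussed above;
the helper itself is illustrative only, not an endorsed pattern.

# Sketch only: page with a pinned max_id instead of since_id, bounding the
# lower end client-side.
import json, urllib.parse, urllib.request

def fetch_snapshot(query, pinned_max_id, stop_below_id=0, rpp=100, max_pages=15):
    """Page with max_id fixed; stop once ids fall at or below stop_below_id
    (the newest id already processed in the previous run)."""
    results = []
    for page in range(1, max_pages + 1):
        params = {"q": query, "rpp": rpp, "page": page, "max_id": pinned_max_id}
        url = "http://search.twitter.com/search.json?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as r:
            batch = json.load(r).get("results", [])
        fresh = [s for s in batch if s["id"] > stop_below_id]
        results.extend(fresh)
        if len(fresh) < len(batch) or len(batch) < rpp:
            break   # reached already-processed territory or the last page
    return results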



[twitter-dev] Re: Search API throwing 404's

2009-04-16 Thread Chad Etzel

Ok, dunno what was happening... I gave my server a swift kick with my
steel-toed boot and all seems well again... weird.
-Chad

On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams d...@twitter.com wrote:
 I just sent 200 queries through without seeing the 404. Are you still seeing
 this?

 Doug Williams
 Twitter API Support
 http://twitter.com/dougw


 On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel jazzyc...@gmail.com wrote:

 Search is throwing 404's for search.json about every 7 or 8 requests...

 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
 <html><head>
 <title>404 Not Found</title>
 </head><body>
 <h1>Not Found</h1>
 <p>The requested URL /search.json was not found on this server.</p>
 </body></html>

 Also got a Forbidden return when trying to connect to
 http://search.twitter.com/ about 10 minutes ago.

 -Chad




[twitter-dev] Re: Search API throwing 404's

2009-04-16 Thread Chad Etzel

Just a quick update:

The problem has popped up again. Doug is aware of this problem, and he
says the servers are all stretched pretty thin (understandable).  Just
curious if anyone else is seeing this as well?

-Chad

On Thu, Apr 16, 2009 at 11:30 PM, Chad Etzel jazzyc...@gmail.com wrote:
 Ok, dunno what was happening... I gave my server a swift kick with my
 steel-toed boot and all seems well again... weird.
 -Chad

 On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams d...@twitter.com wrote:
 I just sent 200 queries through without seeing the 404. Are you still seeing
 this?

 Doug Williams
 Twitter API Support
 http://twitter.com/dougw


 On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel jazzyc...@gmail.com wrote:

 Search is throwing 404's for search.json about every 7 or 8 requests...

  <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
  <html><head>
  <title>404 Not Found</title>
  </head><body>
  <h1>Not Found</h1>
  <p>The requested URL /search.json was not found on this server.</p>
  </body></html>

 Also got a Forbidden return when trying to connect to
 http://search.twitter.com/ about 10 minutes ago.

 -Chad





[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread Chad Etzel

It would be helpful if you could give some example output/results
where you are seeing duplicates across pages.  I have spent a long
long time with the Search API and haven't ever had this problem (or
maybe I have and never noticed it).

-Chad

On Wed, Apr 15, 2009 at 9:07 PM, steve ick...@gmail.com wrote:

 I've been using the Search API in a project and it's been working very
 reliably.  So today I decided to add support for pagination so I could
 pull in more results, and I think I've identified a couple of bugs in
 the pagination code.

 Bug 1)

 The first few results of Page 2 for a query are sometimes duplicates.
 To verify this do the following:

   1. Execute the query:
 http://search.twitter.com/search.atom?lang=en&q=http&rpp=100
   2. Grab the "next" link from the results and execute that.
   3. Compare the IDs at the end of set one with the IDs at the
 beginning of set 2.  They sometimes overlap.


 Bug 2)

 The second bug may be the cause of the 1st bug.  The link you get for
 "next" in a result set is missing the lang=en query param.  So you
 end up getting non-English items in your result set.  You can manually
 add the lang=en param to your query, and while you still get dupes,
 you get fewer.  If you do this, though, you then start getting a warning
 in the result set about an adjusted since_id.

 What's scarier, though, is that the result set seemed to get weird on me
 if I added the lang param and requested pages too fast.  By that I
 mean I would sometimes get results for Page 2 that were (time-wise)
 hours before my original since_id, so my code would just stop
 requesting pages since it assumed it had reached the end of the set.
 The scary part... Adding around a 2-second sleep between queries
 seemed to make this issue go away...


 In general the pagination stuff with the "next" link doesn't seem very
 reliable to me.  You do seem to get fewer dupes than by just calling
 search and incrementing the page number.  But I'm still seeing dupes,
 results for the wrong language, and sometimes totally weird results.

 -steve
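
A small sketch that automates the Bug 1 repro steps above: fetch page 1 of
the Atom results, follow the rel="next" link, and report any ids that show up
on both pages. The Atom namespaces and rel="next" link are as shown in the
feeds later in this thread; the helper names are illustrative.

# Sketch only: detect duplicate ids between page 1 and the rel="next" page.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch(url):
    with urllib.request.urlopen(url) as r:
        return ET.parse(r).getroot()

def ids_and_next(feed):
    ids = {e.findtext(ATOM + "id") for e in feed.findall(ATOM + "entry")}
    nxt = [l.get("href") for l in feed.findall(ATOM + "link")
           if l.get("rel") == "next"]
    return ids, (nxt[0] if nxt else None)

page1 = fetch("http://search.twitter.com/search.atom?lang=en&q=http&rpp=100")
ids1, next_url = ids_and_next(page1)
if next_url:
    ids2, _ = ids_and_next(fetch(next_url))
    print("overlapping ids:", sorted(ids1 & ids2))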



[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread stevenic

Sure...  It repros for me every time in IE using the steps I outlined
above.  Do a query for lang=en&q=http.  Open the "next" link in a
new tab of your browser and compare the IDs.

So I just did this from my home PC and here's the condensed output.
Notice that on Page 2 not only do I get 3 dupes, but I even get a
result that should have been on Page 1... I hadn't seen that one
before, but I'll assume that maybe a different server serviced each
request and they're not synced.


*
for: http://search.twitter.com/search.atom?lang=en&q=http


<feed xmlns:google="http://base.google.com/ns/1.0" xml:lang="en-US"
 xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"
 xmlns="http://www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/">
  <link type="application/atom+xml" rel="self"
   href="http://search.twitter.com/search.atom?lang=en&amp;q=http" />
  <twitter:warning>adjusted since_id, it was older than allowed</twitter:warning>
  <updated>2009-04-16T03:25:30Z</updated>
  <openSearch:itemsPerPage>15</openSearch:itemsPerPage>
  <openSearch:language>en</openSearch:language>
  <link type="application/atom+xml" rel="next"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=2&amp;q=http" />

  ...removed...

<entry>
  <id>tag:search.twitter.com,2005:1530963910</id>
  <published>2009-04-16T03:25:30Z</published>

  ...removed...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963896</id>
</entry>
  <id>tag:search.twitter.com,2005:1530963895</id>
  <id>tag:search.twitter.com,2005:1530963894</id>
  <id>tag:search.twitter.com,2005:1530963881</id>
  <id>tag:search.twitter.com,2005:1530963865</id>
  <id>tag:search.twitter.com,2005:1530963860</id>
  <id>tag:search.twitter.com,2005:1530963834</id>
  <id>tag:search.twitter.com,2005:1530963833</id>
  <id>tag:search.twitter.com,2005:1530963829</id>
  <id>tag:search.twitter.com,2005:1530963827</id>
  <id>tag:search.twitter.com,2005:1530963812</id>
  <id>tag:search.twitter.com,2005:1530963811</id>
  <id>tag:search.twitter.com,2005:1530963796</id>
  <id>tag:search.twitter.com,2005:1530963786</id>
</feed>


*
for:  http://search.twitter.com/search.atom?max_id=1530963910&page=2&q=http

<feed xmlns:google="http://base.google.com/ns/1.0" xml:lang="en-US"
 xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"
 xmlns="http://www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/">
  <link type="application/atom+xml" rel="self"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=2&amp;q=http" />
  <updated>2009-04-16T03:25:31Z</updated>
  <openSearch:itemsPerPage>15</openSearch:itemsPerPage>
  <openSearch:language>en</openSearch:language>
  <link type="application/atom+xml" rel="previous"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=1&amp;q=http" />
  <link type="application/atom+xml" rel="next"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=3&amp;q=http" />

   ...Removed...

<entry>
  <id>tag:search.twitter.com,2005:1530963811</id>
  <published>2009-04-16T03:25:31Z</published>

   ...Duplicate 1...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963803</id>
  <published>2009-04-16T03:25:29Z</published>
  <twitter:lang>en</twitter:lang>

   ...Not Even In Previous Page...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963796</id>
  <published>2009-04-16T03:25:29Z</published>

   ...Duplicate 2...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963786</id>
  <published>2009-04-16T03:25:31Z</published>

   ...Duplicate 3...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963777</id>

   ...First New Result (save the one above)...

</entry>
  <id>tag:search.twitter.com,2005:1530963755</id>
  <id>tag:search.twitter.com,2005:1530963732</id>
  <id>tag:search.twitter.com,2005:1530963725</id>
  <id>tag:search.twitter.com,2005:1530963718</id>
  <id>tag:search.twitter.com,2005:1530963710</id>
  <id>tag:search.twitter.com,2005:1530963709</id>
  <id>tag:search.twitter.com,2005:1530963706</id>
  <id>tag:search.twitter.com,2005:1530963699</id>
  <id>tag:search.twitter.com,2005:1530963698</id>
  <id>tag:search.twitter.com,2005:1530963690</id>
</feed>


[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread stevenic

Ok... So I think I know what's going on.  Well I don't know what's
causing the bug obviously but I think I've narrowed down where it
is...

I just issued the "Page 1" or "previous" query for the above example, and
the IDs don't match the IDs from the original query.  There are
extra rows that come back... 3 to be exact.  So the pagination queries
are working fine.  It's the initial query that's busted.  It looks
like when you do a pagination query you get back all rows
matching the filter, but a query without max_id sometimes drops rows.
Well, in my case it seems to drop rows every time... This should get
fixed...


*
for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http

<feed xmlns:google="http://base.google.com/ns/1.0" xml:lang="en-US"
 xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"
 xmlns="http://www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/">
  <link type="application/atom+xml" rel="self"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=1&amp;q=http" />
  <twitter:warning>adjusted since_id, it was older than allowed</twitter:warning>
  <updated>2009-04-16T03:25:30Z</updated>
  <openSearch:itemsPerPage>15</openSearch:itemsPerPage>
  <openSearch:language>en</openSearch:language>
  <link type="application/atom+xml" rel="next"
   href="http://search.twitter.com/search.atom?max_id=1530963910&amp;page=2&amp;q=http" />

   ...Removed...

<entry>
  <id>tag:search.twitter.com,2005:1530963910</id>
  <published>2009-04-16T03:25:30Z</published>
</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963908</id>
  <published>2009-04-16T03:25:32Z</published>

  ...Where Did This Come From?...

</entry>
<entry>
  <id>tag:search.twitter.com,2005:1530963898</id>
  <published>2009-04-16T03:25:30Z</published>

  ...And This?...

</entry>
  <id>tag:search.twitter.com,2005:1530963896</id>
  <id>tag:search.twitter.com,2005:1530963895</id>
  <id>tag:search.twitter.com,2005:1530963894</id>
<entry>
  <id>tag:search.twitter.com,2005:1530963892</id>
  <published>2009-04-16T03:25:32Z</published>

  ...And This?...

</entry>
  <id>tag:search.twitter.com,2005:1530963881</id>
  <id>tag:search.twitter.com,2005:1530963865</id>
  <id>tag:search.twitter.com,2005:1530963860</id>
  <id>tag:search.twitter.com,2005:1530963834</id>
  <id>tag:search.twitter.com,2005:1530963833</id>
  <id>tag:search.twitter.com,2005:1530963829</id>
  <id>tag:search.twitter.com,2005:1530963827</id>
  <id>tag:search.twitter.com,2005:1530963812</id>
</feed>



[twitter-dev] Re: Search queries not working

2009-04-13 Thread Alex Payne

Yes. Queries are limited to 140 characters.

Basha Shaik wrote:

Hi,

Is there any length limit on the query I pass to the search API?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 10:27 AM, Basha Shaik basha.neteli...@gmail.com wrote:


Hi Chad,
No duplicates are there with this.
Thank You

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik basha.neteli...@gmail.com wrote:

Hi chad,

Thank you. I was trying a query which has only 55 tweets,
and I had kept 100 as rpp, so I was not getting next_page.
When I decreased rpp to 20 and tried, I got it. Thank you very
much. I will check if any duplicates occur with these and let
you know.


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel jazzyc...@gmail.com wrote:

next_page




--
Alex Payne - API Lead, Twitter, Inc.
http://twitter.com/al3x



[twitter-dev] Re: search by link

2009-04-10 Thread Carlos Crosetti
Squeak Smalltalk Twitter Client at

http://code.google.com/p/twitter-client/


[twitter-dev] Re: search by link

2009-04-10 Thread Doug Williams
Search only by source is not supported.

Doug Williams
Twitter API Support
http://twitter.com/dougw


On Fri, Apr 10, 2009 at 10:38 AM, joop23 joo...@gmail.com wrote:


 I was hoping to find a way to search by source through the search API
 without having to pass in some text.  Just the source, through the API.

 On Apr 9, 11:48 am, Chad Etzel jazzyc...@gmail.com wrote:
  It should be noted that you can't just search for a source alone, you
  must pass in some sort of query with it.  So you can't really get all
  tweets from a particular source...
 
  One interesting way to use the source data handed back by the search
  API is to gauge market share for certain keywords/phrases.  I
  created a tool here to do this:
 
  http://tweetgrid.com/sources
 
  it's interesting to search for different people (e.g. from:user) to
  see what clients they are frequently using...
 
  -Chad
 
  On Thu, Apr 9, 2009 at 2:37 PM, Doug Williams d...@twitter.com wrote:
   The search twitter source:tweetdeck [1] will return any tweet with
   'twitter' from the source with parameter 'tweetdeck'. Add your
 appropriate
   format to the URL and you're good to go!
 
   1.http://search.twitter.com/search?q=twitter+source%3Atweetdeck
 
   Doug Williams
   Twitter API Support
  http://twitter.com/dougw
 
   On Thu, Apr 9, 2009 at 11:22 AM, joop23 joo...@gmail.com wrote:
 
   Hello,
 
   Is there a way to search by link on the status message?  For instance,
   I'd like to pull all statuses submitted by TweetDeck application.
 
   thank you



[twitter-dev] Re: search by link

2009-04-09 Thread Abraham Williams
http://search.twitter.com/operators

On Thu, Apr 9, 2009 at 13:22, joop23 joo...@gmail.com wrote:


 Hello,

 Is there a way to search by link on the status message?  For instance,
 I'd like to pull all statuses submitted by TweetDeck application.

 thank you




-- 
Abraham Williams | Hacker | http://abrah.am
@poseurtech | http://the.hackerconundrum.com
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: search by link

2009-04-09 Thread Doug Williams
The search "twitter source:tweetdeck" [1] will return any tweet containing
'twitter' posted from the source 'tweetdeck'. Add your appropriate
format to the URL and you're good to go!

1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck


Doug Williams
Twitter API Support
http://twitter.com/dougw


On Thu, Apr 9, 2009 at 11:22 AM, joop23 joo...@gmail.com wrote:


 Hello,

 Is there a way to search by link on the status message?  For instance,
 I'd like to pull all statuses submitted by TweetDeck application.

 thank you



[twitter-dev] Re: search by link

2009-04-09 Thread Chad Etzel

It should be noted that you can't just search for a source alone, you
must pass in some sort of query with it.  So you can't really get all
tweets from a particular source...

One interesting way to use the source data handed back by the search
API is to gauge market share for certain keywords/phrases.  I
created a tool here to do this:

http://tweetgrid.com/sources

it's interesting to search for different people (e.g. from:user) to
see what clients they are frequently using...

-Chad
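
A small sketch of what this means in practice: source: has to ride along with
a real query term, so the request below pairs it with a keyword, as in Doug's
example. Endpoint and operators are the documented 2009-era ones; the helper
and the commented tally are illustrative only.

# Sketch only: source: must be combined with a query term.
import json, urllib.parse, urllib.request

def search_by_source(keyword, source, rpp=100):
    q = urllib.parse.urlencode({"q": "%s source:%s" % (keyword, source),
                                "rpp": rpp})
    with urllib.request.urlopen("http://search.twitter.com/search.json?" + q) as r:
        return json.load(r).get("results", [])

# e.g. a rough keyword "market share" tally, in the spirit of
# tweetgrid.com/sources (assumes results carry a "source" field):
# from collections import Counter
# print(Counter(s.get("source", "") for s in search_by_source("twitter", "tweetdeck")))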

On Thu, Apr 9, 2009 at 2:37 PM, Doug Williams d...@twitter.com wrote:
 The search twitter source:tweetdeck [1] will return any tweet with
 'twitter' from the source with parameter 'tweetdeck'. Add your appropriate
 format to the URL and you're good to go!

 1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck


 Doug Williams
 Twitter API Support
 http://twitter.com/dougw


 On Thu, Apr 9, 2009 at 11:22 AM, joop23 joo...@gmail.com wrote:

 Hello,

 Is there a way to search by link on the status message?  For instance,
 I'd like to pull all statuses submitted by TweetDeck application.

 thank you




[twitter-dev] Re: Search API Refresh Rate

2009-04-08 Thread peterhough

Perfect, thanks Matt

On Apr 8, 5:27 pm, Matt Sanford m...@twitter.com wrote:
 Hi Pete,

      Every 5 seconds is well below the rate limit and seems like a  
 good rate for reasonably quick responses. It sounds like you're doing  
 the same query each time so that should be fine.

      For people doing requests based on many different queries I  
 recommend that they query less often for searches that have no results  
 than for those that do. By using a back-off you can keep up to date on  
 queries that are hot but not waste cycles requesting queries that very  
 rarely change. Check out the way we do it on search.twitter.com at
 http://search.twitter.com/javascripts/search/refresher.js

 Thanks;
    — Matt Sanford / @mzsanford

 On Apr 8, 2009, at 02:30 AM, peterhough wrote:



  Hello!

  I'm developing an application which needs to constantly request a
  search API result. I'm pushing through a since_id to try to help
  minimise the load on the servers. My question is, what is the optimum
  time limit to loop the API requests? My application will need to act
  upon the result of the search pretty much instantly.

  I currently have the script requesting a search API result every 5
  seconds. Will this hammer your servers too much?

   Do you know the average time third-party clients take to reload tweets?
   Are there any guidelines for this? This would be a factor in when my
   application's actions are seen, and so in how often I need to request a
   search result refresh.

  Thanks,
  Pete
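
A minimal sketch of the back-off Matt recommends above, loosely modeled on
the refresher.js idea: poll a query, stretch the interval whenever a poll
comes back empty, and snap back to the fast interval when results appear. The
interval numbers are arbitrary and the handle callback is illustrative.

# Sketch only: polling with back-off for quiet queries.
import json, time, urllib.parse, urllib.request

def poll(query, handle, min_wait=5, max_wait=300):
    since_id, wait = 0, min_wait
    while True:                            # runs until interrupted
        params = {"q": query}
        if since_id:
            params["since_id"] = since_id
        url = "http://search.twitter.com/search.json?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as r:
            results = json.load(r).get("results", [])
        if results:
            since_id = max(s["id"] for s in results)
            for s in results:
                handle(s)                  # act on new tweets right away
            wait = min_wait                # hot query: poll quickly again
        else:
            wait = min(wait * 2, max_wait) # quiet query: back off
        time.sleep(wait)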


[twitter-dev] Re: Search API, Multiple Hashtags

2009-04-05 Thread Chad Etzel

Yes, this is possible.  Have you actually tried it yet?  Make sure to
use capital OR between the hashtags.

http://search.twitter.com/search?q=%23followfriday+OR+%23pawpawty+OR+%23gno

-chad
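
A tiny sketch of building that OR query programmatically so the # signs get
percent-encoded (%23) for you; urlencode handles the encoding. The helper
name is illustrative only.

# Sketch only: build an OR query over several hashtags.
import urllib.parse

def hashtag_or_url(*tags):
    query = " OR ".join("#" + t.lstrip("#") for t in tags)   # capital OR matters
    return ("http://search.twitter.com/search.json?" +
            urllib.parse.urlencode({"q": query}))

print(hashtag_or_url("followfriday", "pawpawty", "gno"))
# -> http://search.twitter.com/search.json?q=%23followfriday+OR+%23pawpawty+OR+%23gno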

On Sun, Apr 5, 2009 at 2:36 PM, Matt matthewk...@gmail.com wrote:

 Is it possible with the current search api to search for multiple
 hashtags? I'm looking to do an OR search which will look for up to 3
 hashtags.



[twitter-dev] Re: Search API, Multiple Hashtags

2009-04-05 Thread Matt

Thanks. Wasn't aware I could pass along operators.

On Apr 5, 2:41 pm, Chad Etzel jazzyc...@gmail.com wrote:
 Yes, this is possible.  Have you actually tried it yet?  Make sure to
 use capital OR between the hashtags.

 http://search.twitter.com/search?q=%23followfriday+OR+%23pawpawty+OR+...

 -chad

 On Sun, Apr 5, 2009 at 2:36 PM, Matt matthewk...@gmail.com wrote:

  Is it possible with the current search api to search for multiple
  hashtags? I'm looking to do an OR search which will look for up to 3
  hashtags.


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Chad Etzel

Are you using the .atom or .json API feed?  I am only familiar with
the .json feed.
-Chad

On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik basha.neteli...@gmail.com wrote:
 Hi Chad,

 How can we use next_page in the URL we request? Where can we get the URL
 we need to pass?

 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel jazzyc...@gmail.com wrote:

 I'm not sure about these "next_url" and "prev_url" fields (never seen
 them anywhere), but at least in the json data there is a "next_page"
 field which has ?page=_&max_id=__ already prefilled for you.
 This should definitely avoid the duplicate tweet issue.  I've never
 had to do any client-side duplicate filtering when using the correct
 combination of page, max_id, and rpp values...

 If you give very specific examples (the actual URL data would be
 handy) where you are seeing duplicates between pages, we can probably
 help sort this out.

 -Chad

 On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams d...@twitter.com wrote:
 
  The use of prev_url and next_url will take care of step 1 from your
  flow described above. Specifically, next_url will give your
  application the URI to contact to get the next page of results.
 
  Combining max_id and next_url usage will not solve the duplicate
  problem. To overcome that issue, you will have to simply strip the
  duplicate tweets on the client-side.
 
  Thanks,
  Doug Williams
  Twitter API Support
  http://twitter.com/dougw
 
 
 
  On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik basha.neteli...@gmail.com
  wrote:
  HI,
 
  Can you give me an example how i can use prev_url and next_url with
  max_id.
 
 
 
  No I am following below process to search
  1. Set rpp=100 and retrieve 15 pages search results by incrementing
  the param 'page'
  2. Get the id of the last status on page 15 and set that as the max_id
  for the next query
  3. If we have more results, go to step 1
 
  here i got duplicate. 100th record in page 1 was same as 1st record in
  page
  2.
 
  I understood the reason why i got the duplicates from matts previous
  mail.
 
  Will this problem solve if i use max_id with prev_url and next_url?
   How can the duplicate problem be solved
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com wrote:
 
  Basha,
  Pagination is defined well here [1].
 
  The next_url and prev_url fields give your client HTTP URIs to move
  forward and backward through the result set. You can use them to page
  through search results.
 
  I have some work to do on the search docs and I'll add field
  definitions then as well.
 
  1. http://en.wikipedia.org/wiki/Pagination_(web)
 
  Doug Williams
  Twitter API Support
  http://twitter.com/dougw
 
 
 
  On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
  basha.neteli...@gmail.com
  wrote:
   Hi matt,
  
   Thank You
   What is Pagination? Does it mean that I cannot use max_id for
   searching
   tweets. What does next_url and prev_url fields mean. I did not find
   next_url
   and prev_url in documentation. how can these two urls be used with
   max_id.
   Please explain with example if possible.
  
  
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com
   wrote:
  
   Hi Basha,
        The max_id is only intended to be used for pagination via the
    next_url and prev_url fields and is known not to work with since_id.
    It is not documented as a valid parameter because it's known to only
    work in the case it was designed for. We added the max_id to prevent
    the problem where you click on 'Next' and page two starts with
    duplicates. Here's the scenario:
     1. Let's say you search for 'foo'.
     2. You wait 10 seconds, during which 5 people send tweets containing
    'foo'.
     3. You click next and go to page=2 (or call page=2 via the API)
      3.a. If we displayed results 21-40 the first 5 results would look
    like duplicates because they were pushed down by the 5 new entries.
      3.b. If we append a max_id from the time you searched we can do an
    offset from the maximum and the new 5 entries are skipped.
     We use option 3.b. (as does twitter.com now) so you don't see
   duplicates. Since we wanted to provide the same data in the API as
   the
   UI we
   added the next_url and prev_url members in our output.
   Thanks;
     — Matt Sanford
   On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
  
   HI Matt,
  
   when Since_id and Max_id are given together, max_id is not working.
   This
   query is ignoring max_id. But with only since _id its working fine.
   Is
   there
   any problem when max_id and since_id are used together.
  
   Also please tell me what does max_id exactly mean and also what
   does it
   return when we send a request.
   Also tell me what the total returns.
  
  
   

[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
I am using json

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel jazzyc...@gmail.com wrote:


 Are you using the .atom or .json API feed?  I am only familiar with
 the .json feed.
 -Chad

 On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  Hi Chad,
 
  how can we use next_page in the url we request. where can we get the
 url
  we need to pass.
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel jazzyc...@gmail.com wrote:
 
  I'm not sure of these next_url and prev_url fields (never seen
  them anywhere), but at least in the json data there is a next_page
  field which uses ?page=_max_id=__ already prefilled for you.
  This should definitely avoid the duplicate tweet issue.  I've never
  had to do any client-side duplicate filtering when using the correct
  combination of page,max_id, and rpp values...
 
  If you give very specific examples (the actual URL data would be
  handy) where you are seeing duplicates between pages, we can probably
  help sort this out.
 
  -Chad
 
  On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams d...@twitter.com wrote:
  
   The use of prev_url and next_url will take care of step 1 from your
   flow described above. Specifically, next_url will give your
   application the URI to contact to get the next page of results.
  
   Combining max_id and next_url usage will not solve the duplicate
   problem. To overcome that issue, you will have to simply strip the
   duplicate tweets on the client-side.
  
   Thanks,
   Doug Williams
   Twitter API Support
   http://twitter.com/dougw
  
  
  
   On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
 basha.neteli...@gmail.com
   wrote:
   HI,
  
   Can you give me an example how i can use prev_url and next_url with
   max_id.
  
  
  
   No I am following below process to search
   1. Set rpp=100 and retrieve 15 pages search results by incrementing
   the param 'page'
   2. Get the id of the last status on page 15 and set that as the
 max_id
   for the next query
   3. If we have more results, go to step 1
  
   here i got duplicate. 100th record in page 1 was same as 1st record
 in
   page
   2.
  
   I understood the reason why i got the duplicates from matts previous
   mail.
  
   Will this problem solve if i use max_id with prev_url and next_url?
How can the duplicate problem be solved
  
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com
 wrote:
  
   Basha,
   Pagination is defined well here [1].
  
   The next_url and prev_url fields give your client HTTP URIs to move
   forward and backward through the result set. You can use them to
 page
   through search results.
  
   I have some work to do on the search docs and I'll add field
   definitions then as well.
  
   1. http://en.wikipedia.org/wiki/Pagination_(web)
  
   Doug Williams
   Twitter API Support
   http://twitter.com/dougw
  
  
  
   On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
   basha.neteli...@gmail.com
   wrote:
Hi matt,
   
Thank You
What is Pagination? Does it mean that I cannot use max_id for
searching
tweets. What does next_url and prev_url fields mean. I did not
 find
next_url
and prev_url in documentation. how can these two urls be used with
max_id.
Please explain with example if possible.
   
   
   
Regards,
   
Mahaboob Basha Shaik
www.netelixir.com
Making Search Work
   
   
On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com
wrote:
   
Hi Basha,
The max_id is only intended to be used for pagination via the
next_url
and prev_url fields and is known not to work with since_id. It is
not
documented as a valid parameter because it's known to only work
 in
the
case
it was designed for. We added the max_id to prevent the problem
where
you
click on 'Next' and page two starts with duplicates. Here's the
scenario:
 1. Let's say you search for 'foo'.
 2. You wait 10 seconds, during which 5 people send tweets
containing
'foo'.
 3. You click next and go to page=2 (or call page=2 via the API)
   3.a. If we displayed results 21-40 the first 5 results would
look
like
duplicates because they were pushed down by the 5 new entries.
   3.b. If we append a max_id from the time you searched we can
 do
and
offset from the maximum and the new 5 entries are skipped.
  We use option 3.b. (as does twitter.com now) so you don't see
duplicates. Since we wanted to provide the same data in the API
 as
the
UI we
added the next_url and prev_url members in our output.
Thanks;
  — Matt Sanford
On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
   
HI Matt,
   
 

[twitter-dev] Re: Search queries not working

2009-04-04 Thread Chad Etzel

Assuming you get the json data somehow and store it in a variable
called jdata, you can construct the next page url thus:

var next_page_url = "http://search.twitter.com/" + jdata.next_page;

-Chad
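
For completeness, a Python sketch of the same thing Chad's JavaScript line
does, following next_page until it runs out. next_page is the field the JSON
response provides (it already carries page and max_id); the generator itself
is illustrative only.

# Sketch only: keep following the "next_page" fragment until it disappears.
import json, urllib.parse, urllib.request

def all_pages(query, rpp=100):
    url = ("http://search.twitter.com/search.json?" +
           urllib.parse.urlencode({"q": query, "rpp": rpp}))
    while url:
        with urllib.request.urlopen(url) as r:
            jdata = json.load(r)
        yield from jdata.get("results", [])
        next_page = jdata.get("next_page")   # e.g. "?page=2&max_id=...&q=..."
        url = ("http://search.twitter.com/search.json" + next_page) if next_page else None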

On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik basha.neteli...@gmail.com wrote:
 I am using json

 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel jazzyc...@gmail.com wrote:

 Are you using the .atom or .json API feed?  I am only familiar with
 the .json feed.
 -Chad

 On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  Hi Chad,
 
  how can we use next_page in the url we request. where can we get the
  url
  we need to pass.
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel jazzyc...@gmail.com wrote:
 
  I'm not sure of these next_url and prev_url fields (never seen
  them anywhere), but at least in the json data there is a next_page
  field which uses ?page=_max_id=__ already prefilled for you.
  This should definitely avoid the duplicate tweet issue.  I've never
  had to do any client-side duplicate filtering when using the correct
  combination of page,max_id, and rpp values...
 
  If you give very specific examples (the actual URL data would be
  handy) where you are seeing duplicates between pages, we can probably
  help sort this out.
 
  -Chad
 
  On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams d...@twitter.com wrote:
  
   The use of prev_url and next_url will take care of step 1 from your
   flow described above. Specifically, next_url will give your
   application the URI to contact to get the next page of results.
  
   Combining max_id and next_url usage will not solve the duplicate
   problem. To overcome that issue, you will have to simply strip the
   duplicate tweets on the client-side.
  
   Thanks,
   Doug Williams
   Twitter API Support
   http://twitter.com/dougw
  
  
  
   On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
   basha.neteli...@gmail.com
   wrote:
   HI,
  
   Can you give me an example how i can use prev_url and next_url with
   max_id.
  
  
  
   No I am following below process to search
   1. Set rpp=100 and retrieve 15 pages search results by incrementing
   the param 'page'
   2. Get the id of the last status on page 15 and set that as the
   max_id
   for the next query
   3. If we have more results, go to step 1
  
   here i got duplicate. 100th record in page 1 was same as 1st record
   in
   page
   2.
  
   I understood the reason why i got the duplicates from matts previous
   mail.
  
   Will this problem solve if i use max_id with prev_url and next_url?
    How can the duplicate problem be solved
  
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com
   wrote:
  
   Basha,
   Pagination is defined well here [1].
  
   The next_url and prev_url fields give your client HTTP URIs to move
   forward and backward through the result set. You can use them to
   page
   through search results.
  
   I have some work to do on the search docs and I'll add field
   definitions then as well.
  
   1. http://en.wikipedia.org/wiki/Pagination_(web)
  
   Doug Williams
   Twitter API Support
   http://twitter.com/dougw
  
  
  
   On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
   basha.neteli...@gmail.com
   wrote:
Hi matt,
   
Thank You
What is Pagination? Does it mean that I cannot use max_id for
searching
tweets. What does next_url and prev_url fields mean. I did not
find
next_url
and prev_url in documentation. how can these two urls be used
with
max_id.
Please explain with example if possible.
   
   
   
Regards,
   
Mahaboob Basha Shaik
www.netelixir.com
Making Search Work
   
   
On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com
wrote:
   
Hi Basha,
    The max_id is only intended to be used for pagination via
the
next_url
and prev_url fields and is known not to work with since_id. It
is
not
documented as a valid parameter because it's known to only work
in
the
case
it was designed for. We added the max_id to prevent the problem
where
you
click on 'Next' and page two starts with duplicates. Here's the
scenario:
 1. Let's say you search for 'foo'.
 2. You wait 10 seconds, during which 5 people send tweets
containing
'foo'.
 3. You click next and go to page=2 (or call page=2 via the API)
   3.a. If we displayed results 21-40 the first 5 results would
look
like
duplicates because they were pushed down by the 5 new entries.
   3.b. If we append a max_id from the time you searched we can
do
and
offset from the maximum and the new 5 entries are skipped.
  We use option 3.b. (as does twitter.com now) so you don't 

[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Doug,
you said we can use next_url and prev URL.

I tried to get next_url. the response is saying that there is no field
called next_url. Should i pass next _url in the request with max_id? if so
how can i know what next_url is?

Can u give an clear example how to use prev_url and next_url

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 6:57 PM, Doug Williams d...@twitter.com wrote:


 The use of prev_url and next_url will take care of step 1 from your
 flow described above. Specifically, next_url will give your
 application the URI to contact to get the next page of results.

 Combining max_id and next_url usage will not solve the duplicate
 problem. To overcome that issue, you will have to simply strip the
 duplicate tweets on the client-side.

 Thanks,
 Doug Williams
 Twitter API Support
 http://twitter.com/dougw



 On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  HI,
 
  Can you give me an example how i can use prev_url and next_url with
 max_id.
 
 
 
  No I am following below process to search
  1. Set rpp=100 and retrieve 15 pages search results by incrementing
  the param 'page'
  2. Get the id of the last status on page 15 and set that as the max_id
  for the next query
  3. If we have more results, go to step 1
 
  here i got duplicate. 100th record in page 1 was same as 1st record in
 page
  2.
 
  I understood the reason why i got the duplicates from matts previous
 mail.
 
  Will this problem solve if i use max_id with prev_url and next_url?
   How can the duplicate problem be solved
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com wrote:
 
  Basha,
  Pagination is defined well here [1].
 
  The next_url and prev_url fields give your client HTTP URIs to move
  forward and backward through the result set. You can use them to page
  through search results.
 
  I have some work to do on the search docs and I'll add field
  definitions then as well.
 
  1. http://en.wikipedia.org/wiki/Pagination_(web)
 
  Doug Williams
  Twitter API Support
  http://twitter.com/dougw
 
 
 
  On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik basha.neteli...@gmail.com
 
  wrote:
   Hi matt,
  
   Thank You
   What is Pagination? Does it mean that I cannot use max_id for
 searching
   tweets. What does next_url and prev_url fields mean. I did not find
   next_url
   and prev_url in documentation. how can these two urls be used with
   max_id.
   Please explain with example if possible.
  
  
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com
 wrote:
  
   Hi Basha,
   The max_id is only intended to be used for pagination via the
   next_url
   and prev_url fields and is known not to work with since_id. It is not
   documented as a valid parameter because it's known to only work in
 the
   case
   it was designed for. We added the max_id to prevent the problem where
   you
   click on 'Next' and page two starts with duplicates. Here's the
   scenario:
1. Let's say you search for 'foo'.
2. You wait 10 seconds, during which 5 people send tweets containing
   'foo'.
3. You click next and go to page=2 (or call page=2 via the API)
  3.a. If we displayed results 21-40 the first 5 results would look
   like
   duplicates because they were pushed down by the 5 new entries.
  3.b. If we append a max_id from the time you searched we can do
 and
   offset from the maximum and the new 5 entries are skipped.
 We use option 3.b. (as does twitter.com now) so you don't see
   duplicates. Since we wanted to provide the same data in the API as
 the
   UI we
   added the next_url and prev_url members in our output.
   Thanks;
 — Matt Sanford
   On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
  
   HI Matt,
  
   when Since_id and Max_id are given together, max_id is not working.
   This
   query is ignoring max_id. But with only since _id its working fine.
 Is
   there
   any problem when max_id and since_id are used together.
  
   Also please tell me what does max_id exactly mean and also what does
 it
   return when we send a request.
   Also tell me what the total returns.
  
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com
 wrote:
  
   Hi there,
  
  Can you provide an example URL where since_id isn't working so I
   can
   try and reproduce the issue? As for language, the language
 identifier
   is not
   a 100% and sometimes makes mistakes. Hopefully not too many mistakes
   but it
   definitely does.
  
   Thanks;
— Matt Sanford / @mzsanford
  
   On Mar 31, 2009, at 08:14 AM, codepuke wrote:
  
  
   Hi all;
  
   I see a few people complaining about 

[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Chad,
how can we store all the JSON data in a variable like jdata?
Can you tell me how to do that?
I am using Java for JSON processing.

Which technology are you using?
Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel jazzyc...@gmail.com wrote:


 Sorry, typo previously:

 var next_page_url = "http://search.twitter.com/search.json" +
 jdata.next_page;

 On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel jazzyc...@gmail.com wrote:
  Assuming you get the json data somehow and store it in a variable
  called jdata, you can construct the next page url thus:
 
  var next_page_url = "http://search.twitter.com/" + jdata.next_page;
 
  -Chad
 
  On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  I am using json
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel jazzyc...@gmail.com wrote:
 
  Are you using the .atom or .json API feed?  I am only familiar with
  the .json feed.
  -Chad
 
  On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik basha.neteli...@gmail.com
 
  wrote:
   Hi Chad,
  
   how can we use next_page in the url we request. where can we get
 the
   url
   we need to pass.
  
   Regards,
  
   Mahaboob Basha Shaik
   www.netelixir.com
   Making Search Work
  
  
   On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel jazzyc...@gmail.com
 wrote:
  
   I'm not sure of these next_url and prev_url fields (never seen
   them anywhere), but at least in the json data there is a next_page
    field which uses ?page=_&max_id=__ already prefilled for you.
   This should definitely avoid the duplicate tweet issue.  I've never
   had to do any client-side duplicate filtering when using the correct
   combination of page,max_id, and rpp values...
  
   If you give very specific examples (the actual URL data would be
   handy) where you are seeing duplicates between pages, we can
 probably
   help sort this out.
  
   -Chad
  
   On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams d...@twitter.com
 wrote:
   
The use of prev_url and next_url will take care of step 1 from
 your
flow described above. Specifically, next_url will give your
application the URI to contact to get the next page of results.
   
Combining max_id and next_url usage will not solve the duplicate
problem. To overcome that issue, you will have to simply strip the
duplicate tweets on the client-side.
   
Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
   
   
   
On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
basha.neteli...@gmail.com
wrote:
HI,
   
Can you give me an example how i can use prev_url and next_url
 with
max_id.
   
   
   
No I am following below process to search
1. Set rpp=100 and retrieve 15 pages search results by
 incrementing
the param 'page'
2. Get the id of the last status on page 15 and set that as the
max_id
for the next query
3. If we have more results, go to step 1
   
here i got duplicate. 100th record in page 1 was same as 1st
 record
in
page
2.
   
I understood the reason why i got the duplicates from matts
 previous
mail.
   
Will this problem solve if i use max_id with prev_url and
 next_url?
 How can the duplicate problem be solved
   
   
Regards,
   
Mahaboob Basha Shaik
www.netelixir.com
Making Search Work
   
   
On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com
wrote:
   
Basha,
Pagination is defined well here [1].
   
The next_url and prev_url fields give your client HTTP URIs to
 move
forward and backward through the result set. You can use them to
page
through search results.
   
I have some work to do on the search docs and I'll add field
definitions then as well.
   
 1. http://en.wikipedia.org/wiki/Pagination_(web)
   
Doug Williams
Twitter API Support
http://twitter.com/dougw
   
   
   
On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
basha.neteli...@gmail.com
wrote:
 Hi matt,

 Thank You
 What is Pagination? Does it mean that I cannot use max_id for
 searching
 tweets. What does next_url and prev_url fields mean. I did not
 find
 next_url
 and prev_url in documentation. how can these two urls be used
 with
 max_id.
 Please explain with example if possible.



 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
 m...@twitter.com
 wrote:

 Hi Basha,
 The max_id is only intended to be used for pagination via
 the
 next_url
 and prev_url fields and is known not to work with since_id.
 It
 is
 not
 documented as a valid parameter because it's known to only
 work
 in
 the
 case

[twitter-dev] Re: Search queries not working

2009-04-04 Thread Chad Etzel

I have not used java in a long time, but there should be a next_page
key in the map you create from the json response.  Here is an example
json response with rpp=1 for hello:

{"results":[{"text":"hello","to_user_id":null,"from_user":"fsas1975","id":1450457219,"from_user_id":6788389,
"source":"&lt;a href=&quot;http:\/\/twitter.com\/&quot;&gt;web&lt;\/a&gt;",
"profile_image_url":"http:\/\/s3.amazonaws.com\/twitter_production\/profile_images\/117699880\/514HjlKzd1L__AA280__normal.jpg",
"created_at":"Sat, 04 Apr 2009 06:59:57 +0000"}],
"since_id":0,"max_id":1450457219,"refresh_url":"?since_id=1450457219&q=hello","results_per_page":1,
"next_page":"?page=2&max_id=1450457219&rpp=1&q=hello","completed_in":0.013591,"page":1,"query":"hello"}

The part you are interested in is this:
"next_page":"?page=2&max_id=1450457219&rpp=1&q=hello"

you can construct the next page url by appending this value to:
"http://search.twitter.com/search.json"

-Chad
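
Since the question below is about doing this from Java, here is a minimal sketch of the same loop: fetch a page, process it, and keep following next_page until it disappears. This is only an illustration; it assumes the org.json library for parsing, and the query and rpp values are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import org.json.JSONObject;

public class NextPageExample {
    static final String BASE = "http://search.twitter.com/search.json";

    // Fetch one page of search results and parse the JSON body.
    static JSONObject fetch(String url) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) {
            body.append(line);
        }
        in.close();
        return new JSONObject(body.toString());
    }

    public static void main(String[] args) throws Exception {
        String url = BASE + "?q=hello&rpp=100";   // first page
        while (url != null) {
            JSONObject page = fetch(url);
            // ... process page.getJSONArray("results") here ...
            // next_page is only present when another page exists; it already
            // carries page, max_id, rpp and q, so just append it to the base.
            url = page.has("next_page") ? BASE + page.getString("next_page") : null;
        }
    }
}

Note that next_page only shows up when there is another page to fetch (a query with fewer results than rpp has no next_page at all), so checking for its presence doubles as the loop's exit condition.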


On Sat, Apr 4, 2009 at 2:55 AM, Basha Shaik basha.neteli...@gmail.com wrote:
 Hi, I am using Java. We parse the JSON response and store the values as
 key-value pairs in a Map.

 Nowhere in the response did I find next_url or next_page.
 Can you tell me how we can store all the JSON data in a variable?

 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work



[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi chad,

Thank you. I was trying a query that has only 55 tweets and had kept rpp at
100, so I was not getting next_page. When I decreased rpp to 20 and tried
again, I got it. Thank you very much. I will check whether any duplicates occur
with these and let you know.

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel jazzyc...@gmail.com wrote:

 next_page



[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
HI,

Can you give me an example how i can use prev_url and next_url with max_id.



No I am following below process to search
1. Set rpp=100 and retrieve 15 pages search results by incrementing
the param 'page'
2. Get the id of the last status on page 15 and set that as the max_id
for the next query
3. If we have more results, go to step 1

here i got duplicate. 100th record in page 1 was same as 1st record in page
2.

I understood the reason why i got the duplicates from matts previous mail.

Will this problem solve if i use max_id with prev_url and next_url?
 How can the duplicate problem be solved


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com wrote:


 Basha,
 Pagination is defined well here [1].

 The next_url and prev_url fields give your client HTTP URIs to move
 forward and backward through the result set. You can use them to page
 through search results.

 I have some work to do on the search docs and I'll add field
 definitions then as well.

  1. http://en.wikipedia.org/wiki/Pagination_(web)

 Doug Williams
 Twitter API Support
 http://twitter.com/dougw



 On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  Hi matt,
 
  Thank You
  What is Pagination? Does it mean that I cannot use max_id for searching
  tweets. What does next_url and prev_url fields mean. I did not find
 next_url
  and prev_url in documentation. how can these two urls be used with
 max_id.
  Please explain with example if possible.
 
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi Basha,
  The max_id is only intended to be used for pagination via the
 next_url
  and prev_url fields and is known not to work with since_id. It is not
  documented as a valid parameter because it's known to only work in the
 case
  it was designed for. We added the max_id to prevent the problem where
 you
  click on 'Next' and page two starts with duplicates. Here's the
 scenario:
   1. Let's say you search for 'foo'.
   2. You wait 10 seconds, during which 5 people send tweets containing
  'foo'.
   3. You click next and go to page=2 (or call page=2 via the API)
 3.a. If we displayed results 21-40 the first 5 results would look
 like
  duplicates because they were pushed down by the 5 new entries.
 3.b. If we append a max_id from the time you searched we can do and
  offset from the maximum and the new 5 entries are skipped.
We use option 3.b. (as does twitter.com now) so you don't see
  duplicates. Since we wanted to provide the same data in the API as the
 UI we
  added the next_url and prev_url members in our output.
  Thanks;
— Matt Sanford
  On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
 
  HI Matt,
 
  when Since_id and Max_id are given together, max_id is not working. This
  query is ignoring max_id. But with only since _id its working fine. Is
 there
  any problem when max_id and since_id are used together.
 
  Also please tell me what does max_id exactly mean and also what does it
  return when we send a request.
  Also tell me what the total returns.
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi there,
 
 Can you provide an example URL where since_id isn't working so I can
  try and reproduce the issue? As for language, the language identifier
 is not
  a 100% and sometimes makes mistakes. Hopefully not too many mistakes
 but it
  definitely does.
 
  Thanks;
   — Matt Sanford / @mzsanford
 
  On Mar 31, 2009, at 08:14 AM, codepuke wrote:
 
 
  Hi all;
 
  I see a few people complaining about the since_id not working.  I too
  have the same issue - I am currently storing the last executed id and
  having to check new tweets to make sure their id is greater than my
  last processed id as a temporary workaround.
 
  I have also noticed that the filter by language param also doesn't
  seem to be working 100% - I notice a few chinese tweets, as well as
  tweets having a null value for language...
 
 
 
 
 
 



[twitter-dev] Re: Search queries not working

2009-04-03 Thread Doug Williams

The use of prev_url and next_url will take care of step 1 from your
flow described above. Specifically, next_url will give your
application the URI to contact to get the next page of results.

Combining max_id and next_url usage will not solve the duplicate
problem. To overcome that issue, you will have to simply strip the
duplicate tweets on the client-side.

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
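
If duplicates do slip through (for example when mixing hand-rolled page counting with your own max_id, as in the flow quoted below), the client-side stripping mentioned above can be as simple as remembering which status ids have already been handled. A rough Java sketch, again assuming org.json; the class and method names are only illustrative:

import java.util.HashSet;
import java.util.Set;
import org.json.JSONArray;
import org.json.JSONObject;

public class DuplicateFilter {
    private final Set<Long> seenIds = new HashSet<Long>();

    // Returns true the first time a status id is seen, false for repeats.
    boolean isNew(JSONObject status) {
        return seenIds.add(status.getLong("id"));
    }

    // Walk one page of search results, skipping anything already processed.
    void process(JSONObject page) {
        JSONArray results = page.getJSONArray("results");
        for (int i = 0; i < results.length(); i++) {
            JSONObject status = results.getJSONObject(i);
            if (isNew(status)) {
                // ... store or handle the tweet here ...
            }
        }
    }
}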



On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik basha.neteli...@gmail.com wrote:
 HI,

 Can you give me an example how i can use prev_url and next_url with max_id.



 No I am following below process to search
 1. Set rpp=100 and retrieve 15 pages search results by incrementing
 the param 'page'
 2. Get the id of the last status on page 15 and set that as the max_id
 for the next query
 3. If we have more results, go to step 1

 here i got duplicate. 100th record in page 1 was same as 1st record in page
 2.

 I understood the reason why i got the duplicates from matts previous mail.

 Will this problem solve if i use max_id with prev_url and next_url?
  How can the duplicate problem be solved


 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com wrote:

 Basha,
 Pagination is defined well here [1].

 The next_url and prev_url fields give your client HTTP URIs to move
 forward and backward through the result set. You can use them to page
 through search results.

 I have some work to do on the search docs and I'll add field
 definitions then as well.

 1. http://en.wikipedia.org/wiki/Pagination_(web)

 Doug Williams
 Twitter API Support
 http://twitter.com/dougw



 On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  Hi matt,
 
  Thank You
  What is Pagination? Does it mean that I cannot use max_id for searching
  tweets. What does next_url and prev_url fields mean. I did not find
  next_url
  and prev_url in documentation. how can these two urls be used with
  max_id.
  Please explain with example if possible.
 
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi Basha,
      The max_id is only intended to be used for pagination via the
  next_url
  and prev_url fields and is known not to work with since_id. It is not
  documented as a valid parameter because it's known to only work in the
  case
  it was designed for. We added the max_id to prevent the problem where
  you
  click on 'Next' and page two starts with duplicates. Here's the
  scenario:
   1. Let's say you search for 'foo'.
   2. You wait 10 seconds, during which 5 people send tweets containing
  'foo'.
   3. You click next and go to page=2 (or call page=2 via the API)
     3.a. If we displayed results 21-40 the first 5 results would look
  like
  duplicates because they were pushed down by the 5 new entries.
     3.b. If we append a max_id from the time you searched we can do and
  offset from the maximum and the new 5 entries are skipped.
    We use option 3.b. (as does twitter.com now) so you don't see
  duplicates. Since we wanted to provide the same data in the API as the
  UI we
  added the next_url and prev_url members in our output.
  Thanks;
    — Matt Sanford
  On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
 
  HI Matt,
 
  when Since_id and Max_id are given together, max_id is not working.
  This
  query is ignoring max_id. But with only since _id its working fine. Is
  there
  any problem when max_id and since_id are used together.
 
  Also please tell me what does max_id exactly mean and also what does it
  return when we send a request.
  Also tell me what the total returns.
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi there,
 
     Can you provide an example URL where since_id isn't working so I
  can
  try and reproduce the issue? As for language, the language identifier
  is not
  a 100% and sometimes makes mistakes. Hopefully not too many mistakes
  but it
  definitely does.
 
  Thanks;
   — Matt Sanford / @mzsanford
 
  On Mar 31, 2009, at 08:14 AM, codepuke wrote:
 
 
  Hi all;
 
  I see a few people complaining about the since_id not working.  I too
  have the same issue - I am currently storing the last executed id and
  having to check new tweets to make sure their id is greater than my
  last processed id as a temporary workaround.
 
  I have also noticed that the filter by language param also doesn't
  seem to be working 100% - I notice a few chinese tweets, as well as
  tweets having a null value for language...
 
 
 
 
 
 




[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

I'm not sure of these next_url and prev_url fields (never seen
them anywhere), but at least in the json data there is a next_page
field which uses ?page=_&max_id=__ already prefilled for you.
This should definitely avoid the duplicate tweet issue.  I've never
had to do any client-side duplicate filtering when using the correct
combination of page,max_id, and rpp values...

If you give very specific examples (the actual URL data would be
handy) where you are seeing duplicates between pages, we can probably
help sort this out.

-Chad

On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams d...@twitter.com wrote:

 The use of prev_url and next_url will take care of step 1 from your
 flow described above. Specifically, next_url will give your
 application the URI to contact to get the next page of results.

 Combining max_id and next_url usage will not solve the duplicate
 problem. To overcome that issue, you will have to simply strip the
 duplicate tweets on the client-side.

 Thanks,
 Doug Williams
 Twitter API Support
 http://twitter.com/dougw



 On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik basha.neteli...@gmail.com 
 wrote:
 HI,

 Can you give me an example how i can use prev_url and next_url with max_id.



 No I am following below process to search
 1. Set rpp=100 and retrieve 15 pages search results by incrementing
 the param 'page'
 2. Get the id of the last status on page 15 and set that as the max_id
 for the next query
 3. If we have more results, go to step 1

 here i got duplicate. 100th record in page 1 was same as 1st record in page
 2.

 I understood the reason why i got the duplicates from matts previous mail.

 Will this problem solve if i use max_id with prev_url and next_url?
  How can the duplicate problem be solved


 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams d...@twitter.com wrote:

 Basha,
 Pagination is defined well here [1].

 The next_url and prev_url fields give your client HTTP URIs to move
 forward and backward through the result set. You can use them to page
 through search results.

 I have some work to do on the search docs and I'll add field
 definitions then as well.

 1. http://en.wikipedia.org/wiki/Pagination_(web)

 Doug Williams
 Twitter API Support
 http://twitter.com/dougw



 On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik basha.neteli...@gmail.com
 wrote:
  Hi matt,
 
  Thank You
  What is Pagination? Does it mean that I cannot use max_id for searching
  tweets. What does next_url and prev_url fields mean. I did not find
  next_url
  and prev_url in documentation. how can these two urls be used with
  max_id.
  Please explain with example if possible.
 
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi Basha,
  The max_id is only intended to be used for pagination via the
  next_url
  and prev_url fields and is known not to work with since_id. It is not
  documented as a valid parameter because it's known to only work in the
  case
  it was designed for. We added the max_id to prevent the problem where
  you
  click on 'Next' and page two starts with duplicates. Here's the
  scenario:
   1. Let's say you search for 'foo'.
   2. You wait 10 seconds, during which 5 people send tweets containing
  'foo'.
   3. You click next and go to page=2 (or call page=2 via the API)
 3.a. If we displayed results 21-40 the first 5 results would look
  like
  duplicates because they were pushed down by the 5 new entries.
 3.b. If we append a max_id from the time you searched we can do and
  offset from the maximum and the new 5 entries are skipped.
We use option 3.b. (as does twitter.com now) so you don't see
  duplicates. Since we wanted to provide the same data in the API as the
  UI we
  added the next_url and prev_url members in our output.
  Thanks;
— Matt Sanford
  On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
 
  HI Matt,
 
  when Since_id and Max_id are given together, max_id is not working.
  This
  query is ignoring max_id. But with only since _id its working fine. Is
  there
  any problem when max_id and since_id are used together.
 
  Also please tell me what does max_id exactly mean and also what does it
  return when we send a request.
  Also tell me what the total returns.
 
 
  Regards,
 
  Mahaboob Basha Shaik
  www.netelixir.com
  Making Search Work
 
 
  On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:
 
  Hi there,
 
 Can you provide an example URL where since_id isn't working so I
  can
  try and reproduce the issue? As for language, the language identifier
  is not
  a 100% and sometimes makes mistakes. Hopefully not too many mistakes
  but it
  definitely does.
 
  Thanks;
   — Matt Sanford / @mzsanford
 
  On Mar 31, 2009, at 08:14 AM, codepuke wrote:
 
 
  Hi all;
 
  I see a few people complaining about the since_id not working.  I 

[twitter-dev] Re: search fine time interval

2009-04-03 Thread Cestino

Many thanks Doug,

I tried client-side filtering but ran into the 1500 tweet limit, so I
cannot get to tweets in the middle of the day. Is there an alternative
solution? Thanks for your patience, I'm new to APIs.

Cestino

On Apr 1, 3:25 pm, Doug Williams d...@twitter.com wrote:
 Cestino,
 Search only allows dates to be specified down to the day. We don't allow the
 granularity to be more specific than that. If you are only looking for a
 specific hour, our current recommendation is to do client-side filtering.

 Thanks,
 Doug Williams
 Twitter API Support
 http://twitter.com/dougw

 On Wed, Apr 1, 2009 at 1:58 PM, Cestino paulstantonea...@gmail.com wrote:

  Hi All,

  Is it possible to search a finer time interval than a day? For example
  search between 12:00 and 1:00 on a specific day. I have tried numerous
  formats to extend the since and until operators to include
  hour:minute:second with no luck.

  Many thanks,
  Cestino


[twitter-dev] Re: search fine time interval

2009-04-03 Thread Doug Williams

There is a technique to work around the 1500 tweet paging limit but we
don't officially support it so I'd rather not link you directly. It is
available through a search of this group's archives.

Regards,
Doug Williams
Twitter API Support
http://twitter.com/dougw



On Fri, Apr 3, 2009 at 12:27 PM, Cestino paulstantonea...@gmail.com wrote:

 Many thanks Doug,

 I tried client-side filtering but ran into the 1500 tweet limit, so I
 cannot get to tweets in the middle of the day. Is there an alternative
 solution? Thanks for your patience, I'm new to APIs.

 Cestino

 On Apr 1, 3:25 pm, Doug Williams d...@twitter.com wrote:
 Cestino,
 Search only allows dates to be specified down to the day. We don't allow the
 granularity to be more specific than that. If you are only looking for a
 specific hour, our current recommendation is to do client-side filtering.

 Thanks,
 Doug Williams
 Twitter API Support
 http://twitter.com/dougw

 On Wed, Apr 1, 2009 at 1:58 PM, Cestino paulstantonea...@gmail.com wrote:

  Hi All,

  Is it possible to search a finer time interval than a day? For example
  search between 12:00 and 1:00 on a specific day. I have tried numerous
  formats to extend the since and until operators to include
  hour:minute:second with no luck.

  Many thanks,
  Cestino



[twitter-dev] Re: Search queries not working

2009-04-02 Thread feedbackmine

Hi Matt,

I have tried to use language parameter of twitter search and find the
result is very unreliable. For example:
http://search.twitter.com/search?lang=all&q=tweetjobsearch returns 10
results (all in english), but
http://search.twitter.com/search?lang=en&q=tweetjobsearch only returns
3.

I googled this list and it seems you are using n-gram based algorithm
(http://groups.google.com/group/twitter-development-talk/msg/
565313d7b36e8d65). I have found n-gram algorithm works very well for
language detection, but the quality of training data may make a big
difference.

Recently I have developed a language detector (in ruby) myself:
http://github.com/feedbackmine/language_detector/tree/master
It uses wikipedia's data for training, and based on my limited
experience it works well. Actually using wikipedia's data is not my
idea, all credits should go to Kevin Burton (http://feedblog.org/
2005/08/19/ngram-language-categorization-source/ ).

Just thought you may be interested.

@feedbackmine
http://twitter.com/feedbackmine
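
For readers curious what the n-gram approach boils down to, here is a toy Java sketch (nothing like the linked detector's training or accuracy, just the core idea): build character trigram counts for each language from some sample text and score an input by cosine similarity against each profile.

import java.util.HashMap;
import java.util.Map;

public class TinyNgramGuess {
    // Count character trigrams in a lowercased string.
    static Map<String, Integer> trigrams(String text) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        String s = text.toLowerCase();
        for (int i = 0; i + 3 <= s.length(); i++) {
            String g = s.substring(i, i + 3);
            Integer c = counts.get(g);
            counts.put(g, c == null ? 1 : c + 1);
        }
        return counts;
    }

    // Cosine similarity between two trigram count vectors.
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            na += e.getValue() * e.getValue();
            Integer other = b.get(e.getKey());
            if (other != null) dot += e.getValue() * other;
        }
        for (Integer v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        Map<String, Integer> en = trigrams("the quick brown fox jumps over the lazy dog");
        Map<String, Integer> es = trigrams("el rapido zorro marron salta sobre el perro perezoso");
        Map<String, Integer> tweet = trigrams("looking for a quick way over the bridge");
        System.out.println(similarity(tweet, en) >= similarity(tweet, es) ? "en" : "es");
    }
}

With only a sentence of training text per language the guesses are shaky, which is the point made above: the quality of the training data drives the quality of the detector.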

On Mar 31, 11:22 am, Matt Sanford m...@twitter.com wrote:
 Hi there,

      Can you provide an example URL where since_id isn't working so I  
 can try and reproduce the issue? As for language, the language
 identifier is not a 100% and sometimes makes mistakes. Hopefully not  
 too many mistakes but it definitely does.

 Thanks;
    — Matt Sanford / @mzsanford

 On Mar 31, 2009, at 08:14 AM, codepuke wrote:





  Hi all;

  I see a few people complaining about the since_id not working.  I too
  have the same issue - I am currently storing the last executed id and
  having to check new tweets to make sure their id is greater than my
  last processed id as a temporary workaround.

   I have also noticed that the filter by language param also doesn't
   seem to be working 100% - I notice a few chinese tweets, as well as
   tweets having a null value for language...


[twitter-dev] Re: Search queries not working

2009-04-02 Thread Basha Shaik
Hi matt,

Thank You
What is Pagination? Does it mean that I cannot use max_id for searching
tweets. What does next_url and prev_url fields mean. I did not find next_url
and prev_url in documentation. how can these two urls be used with max_id.
Please explain with example if possible.



Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com wrote:

 Hi Basha,
 The max_id is only intended to be used for pagination via the next_url
 and prev_url fields and is known not to work with since_id. It is not
 documented as a valid parameter because it's known to only work in the case
 it was designed for. We added the max_id to prevent the problem where you
 click on 'Next' and page two starts with duplicates. Here's the scenario:

  1. Let's say you search for 'foo'.
  2. You wait 10 seconds, during which 5 people send tweets containing
 'foo'.
  3. You click next and go to page=2 (or call page=2 via the API)
3.a. If we displayed results 21-40 the first 5 results would look like
 duplicates because they were pushed down by the 5 new entries.
3.b. If we append a max_id from the time you searched we can do and
 offset from the maximum and the new 5 entries are skipped.

   We use option 3.b. (as does twitter.com now) so you don't see
 duplicates. Since we wanted to provide the same data in the API as the UI we
 added the next_url and prev_url members in our output.

 Thanks;
   — Matt Sanford

 On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:

 HI Matt,

 when Since_id and Max_id are given together, max_id is not working. This
 query is ignoring max_id. But with only since _id its working fine. Is there
 any problem when max_id and since_id are used together.

 Also please tell me what does max_id exactly mean and also what does it
 return when we send a request.
 Also tell me what the total returns.


 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:


 Hi there,

Can you provide an example URL where since_id isn't working so I can
 try and reproduce the issue? As for language, the language identifier is not
 a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
 definitely does.

 Thanks;
  — Matt Sanford / @mzsanford


 On Mar 31, 2009, at 08:14 AM, codepuke wrote:


 Hi all;

 I see a few people complaining about the since_id not working.  I too
 have the same issue - I am currently storing the last executed id and
 having to check new tweets to make sure their id is greater than my
 last processed id as a temporary workaround.

 I have also noticed that the filter by language param also doesn't
 seem to be working 100% - I notice a few chinese tweets, as well as
 tweets having a null value for language...







[twitter-dev] Re: Search queries not working

2009-04-02 Thread Doug Williams

Basha,
Pagination is defined well here [1].

The next_url and prev_url fields give your client HTTP URIs to move
forward and backward through the result set. You can use them to page
through search results.

I have some work to do on the search docs and I'll add field
definitions then as well.

1. http://en.wikipedia.org/wiki/Pagination_(web)

Doug Williams
Twitter API Support
http://twitter.com/dougw



On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik basha.neteli...@gmail.com wrote:
 Hi matt,

 Thank You
 What is Pagination? Does it mean that I cannot use max_id for searching
 tweets. What does next_url and prev_url fields mean. I did not find next_url
 and prev_url in documentation. how can these two urls be used with max_id.
 Please explain with example if possible.



 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford m...@twitter.com wrote:

 Hi Basha,
     The max_id is only intended to be used for pagination via the next_url
 and prev_url fields and is known not to work with since_id. It is not
 documented as a valid parameter because it's known to only work in the case
 it was designed for. We added the max_id to prevent the problem where you
 click on 'Next' and page two starts with duplicates. Here's the scenario:
  1. Let's say you search for 'foo'.
  2. You wait 10 seconds, during which 5 people send tweets containing
 'foo'.
  3. You click next and go to page=2 (or call page=2 via the API)
    3.a. If we displayed results 21-40 the first 5 results would look like
 duplicates because they were pushed down by the 5 new entries.
    3.b. If we append a max_id from the time you searched we can do and
 offset from the maximum and the new 5 entries are skipped.
   We use option 3.b. (as does twitter.com now) so you don't see
 duplicates. Since we wanted to provide the same data in the API as the UI we
 added the next_url and prev_url members in our output.
 Thanks;
   — Matt Sanford
 On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:

 HI Matt,

 when Since_id and Max_id are given together, max_id is not working. This
 query is ignoring max_id. But with only since _id its working fine. Is there
 any problem when max_id and since_id are used together.

 Also please tell me what does max_id exactly mean and also what does it
 return when we send a request.
 Also tell me what the total returns.


 Regards,

 Mahaboob Basha Shaik
 www.netelixir.com
 Making Search Work


 On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:

 Hi there,

    Can you provide an example URL where since_id isn't working so I can
 try and reproduce the issue? As for language, the language identifier is not
 a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
 definitely does.

 Thanks;
  — Matt Sanford / @mzsanford

 On Mar 31, 2009, at 08:14 AM, codepuke wrote:


 Hi all;

 I see a few people complaining about the since_id not working.  I too
 have the same issue - I am currently storing the last executed id and
 having to check new tweets to make sure their id is greater than my
 last processed id as a temporary workaround.

 I have also noticed that the filter by language param also doesn't
 seem to be working 100% - I notice a few chinese tweets, as well as
 tweets having a null value for language...








[twitter-dev] Re: Search queries not working

2009-04-01 Thread Matt Sanford

Hi Basha,

The max_id is only intended to be used for pagination via the  
next_url and prev_url fields and is known not to work with since_id.  
It is not documented as a valid parameter because it's known to only  
work in the case it was designed for. We added the max_id to prevent  
the problem where you click on 'Next' and page two starts with  
duplicates. Here's the scenario:


 1. Let's say you search for 'foo'.
 2. You wait 10 seconds, during which 5 people send tweets containing  
'foo'.

 3. You click next and go to page=2 (or call page=2 via the API)
   3.a. If we displayed results 21-40 the first 5 results would look  
like duplicates because they were pushed down by the 5 new entries.
   3.b. If we append a max_id from the time you searched we can do
an offset from the maximum and the new 5 entries are skipped.


  We use option 3.b. (as does twitter.com now) so you don't see  
duplicates. Since we wanted to provide the same data in the API as the  
UI we added the next_url and prev_url members in our output.


Thanks;
  — Matt Sanford
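
To make option 3.b concrete, the request sequence looks like the sketch below: page 1 is issued normally, its max_id field is recorded, and every later page repeats that max_id so tweets arriving after the first request cannot shift results into page two. A small Java illustration that only prints the URLs; the query and rpp are placeholders, and the max_id value is borrowed from the JSON example earlier in this archive.

// Option 3.b by hand: pin max_id from page 1 on every subsequent page.
public class MaxIdPaging {
    static final String BASE = "http://search.twitter.com/search.json";

    public static void main(String[] args) {
        String query = "foo";
        int rpp = 100;
        long maxId = 1450457219L;   // read from the max_id field of page 1
        System.out.println(BASE + "?q=" + query + "&rpp=" + rpp);   // page 1
        for (int page = 2; page <= 15; page++) {
            System.out.println(BASE + "?q=" + query + "&rpp=" + rpp
                    + "&page=" + page + "&max_id=" + maxId);
        }
    }
}

Following the next_page field from each response builds exactly these URLs for you, which is why it is the recommended route.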

On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:


HI Matt,

when Since_id and Max_id are given together, max_id is not working.  
This query is ignoring max_id. But with only since _id its working  
fine. Is there any problem when max_id and since_id are used together.


Also please tell me what does max_id exactly mean and also what does  
it return when we send a request.

Also tell me what the total returns.


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com  
wrote:


Hi there,

   Can you provide an example URL where since_id isn't working so I  
can try and reproduce the issue? As for language, the language  
identifier is not a 100% and sometimes makes mistakes. Hopefully not  
too many mistakes but it definitely does.


Thanks;
 — Matt Sanford / @mzsanford


On Mar 31, 2009, at 08:14 AM, codepuke wrote:


Hi all;

I see a few people complaining about the since_id not working.  I too
have the same issue - I am currently storing the last executed id and
having to check new tweets to make sure their id is greater than my
last processed id as a temporary workaround.

I have also noticed that the filter by language param also doesn't
seem to be working 100% - I notice a few chinese tweets, as well as
tweets having a null value for language...







[twitter-dev] Re: search fine time interval

2009-04-01 Thread Doug Williams
Cestino,
Search only allows dates to be specified down to the day. We don't allow the
granularity to be more specific than that. If you are only looking for a
specific hour, our current recommendation is to do client-side filtering.

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
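
For the client-side part, that filtering amounts to parsing each result's created_at and keeping only the tweets inside your window. A minimal Java sketch, assuming the timestamp format visible in the JSON sample elsewhere in this archive (e.g. "Sat, 04 Apr 2009 06:59:57 +0000"); the class and method names are made up for the example:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class TimeWindowFilter {
    // created_at in the JSON results looks like "Sat, 04 Apr 2009 06:59:57 +0000".
    private static final SimpleDateFormat CREATED_AT =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.US);

    // Keep only tweets whose created_at falls inside [from, to).
    static boolean inWindow(String createdAt, Date from, Date to) throws Exception {
        Date when = CREATED_AT.parse(createdAt);
        return !when.before(from) && when.before(to);
    }
}

You would still query with since:/until: for the day in question and drop everything outside the 12:00 to 13:00 slice on your side.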


On Wed, Apr 1, 2009 at 1:58 PM, Cestino paulstantonea...@gmail.com wrote:


 Hi All,

 Is it possible to search a finer time interval than a day? For example
 search between 12:00 and 1:00 on a specific day. I have tried numerous
 formats to extend the since and until operators to include
 hour:minute:second with no luck.

 Many thanks,
 Cestino



[twitter-dev] Re: Search queries not working

2009-03-31 Thread Basha Shaik
HI Matt,

when Since_id and Max_id are given together, max_id is not working. This
query is ignoring max_id. But with only since _id its working fine. Is there
any problem when max_id and since_id are used together.

Also please tell me what does max_id exactly mean and also what does it
return when we send a request.
Also tell me what the total returns.


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford m...@twitter.com wrote:


 Hi there,

Can you provide an example URL where since_id isn't working so I can try
 and reproduce the issue? As for language, the language identifier is not a
 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
 definitely does.

 Thanks;
  — Matt Sanford / @mzsanford


 On Mar 31, 2009, at 08:14 AM, codepuke wrote:


 Hi all;

 I see a few people complaining about the since_id not working.  I too
 have the same issue - I am currently storing the last executed id and
 having to check new tweets to make sure their id is greater than my
 last processed id as a temporary workaround.

 I have also noticed that the filter by language param also doesn't
 seem to be working 100% - I notice a few chinese tweets, as well as
 tweets having a null value for language...





[twitter-dev] Re: Search API rate limit

2009-03-14 Thread Doug Williams
Hi,
There is a rate limit for the Search API, but it is higher than the 100 requests
per hour imposed by the REST API. The limiting is performed by IP
address. The default limit is high enough that most applications shouldn't
be affected.

As the search architecture has no notion of accounts, it would be difficult
to add account-based limiting. We do however whitelist IP addresses in the
event that the higher limit is warranted.

Doug Williams
Twitter API Support
http://twitter.com/dougw
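
Whatever the actual ceiling is, a polling client can stay well clear of it by spacing its requests. A minimal Java throttle, purely illustrative (the interval is an arbitrary choice, not an official number):

public class SimpleThrottle {
    private final long minIntervalMillis;
    private long lastRequest = 0;

    SimpleThrottle(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    // Block until at least minIntervalMillis has passed since the last call.
    synchronized void acquire() throws InterruptedException {
        long wait = lastRequest + minIntervalMillis - System.currentTimeMillis();
        if (wait > 0) Thread.sleep(wait);
        lastRequest = System.currentTimeMillis();
    }
}

Calling acquire() before each search request caps the client at one request per interval, e.g. new SimpleThrottle(20000) for one call every 20 seconds.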


On Sat, Mar 14, 2009 at 11:14 AM, benjamin brande...@gmail.com wrote:


 I've noticed that the Search API page has recently been changed to
 say:

 The standard API rate limiting is liberal enough for most use cases.
 If you find yourself encountering these limits, please contact us and
 describe your app's requirements.

 Recently, this page stated that the rate for search is not limited. A
 week ago, Alex Payne stated here that there is no limit on the search
 API use; is this still the case, and the wording on the API Wiki is
 irrelevant? Or has the 100 req/hr limit been imposed on the search API
 as well?

 If the search API has been limited, is there a possibility of adding
 an authenticated search so that developers can apply for rate limit
 increases via their accounts, rather than a static IP that may not be
 feasible for their application?

 Thanks.



[twitter-dev] Re: Search API and Feeds ... using a sinceID ... please?

2009-03-06 Thread Matt Sanford

some information inline …

On Mar 6, 2009, at 01:25 PM, Scott C. Lemon wrote:



I'm working on our site - http://www.TopFollowFriday.com - and am
currently using the search API to search for the #followfriday
hashtag.  All is well, and it's working ... except ...

The search feed only returns the last 15 items.

There is a since_id, but that is useless as it only appears to work
*within* the last 15.  Then there is paging ... but I'm unclear
exactly what good paging does?

If I make a request, and get 15, and then make a request for page
2 ... what exactly does page 2 consist of?  I'm not passing anything
else by the page, and I'm guessing that you don't store server-side
state information ... so page 2 doesn't really mean anything to
me ...

1. It could be that page 2 will somehow be exactly the 16th-30th items
in the list from when I made my first request ... but I somehow doubt
that ...


Actually, both the JSON and atom APIs return an attribute called  
'next_url' which includes the page parameter as well as max_id so it  
works as you would expect.




2. I'm thinking that page 2 will maybe be the 16th-30th items in the
new list that now includes all of the tweets that came in since my
initial query.  Bad.


See above, about max_id.



I'm caught with an API that I'm confused with ... how can I make my
queries in a way that capture all of the tweets ... but not have to
pound the server with requests?

Can you do one of the following?

1. Straighten me out, and explain to me how I'm missing the boat
here.  I'm wishing that I was missing something, but it just seems
this is how the API works.

2. Provide *more* than just 15 items ... 25 ... 50 ... or let me
specify up to some maximum?  I swear that I can write some good rate-
limiting code that would automatically adjust the numbers to try and
keep it optimal.


you can do this, check out the rpp parameter at 
http://apiwiki.twitter.com/Search-API-Documentation



3. Implement the since_id so that it actually worked properly - not
capped by 15 items - so that I could call the API at some reasonable
rate and pass along some since_id and get all of the tweets since that
tweet.  It would even be ok to put a max size on that also ...



it works as expected when you paginate. We can't support a call that
returns millions of entries (since_id=0) so the max_id/page is the  
correct way to handle this.



4. Tell me about the items in the bug list that I need to vote for to
make this happen ASAP.  :-)

You guys are awesome ... this has been a fun project, and the first
friday was a great success ... I want to clean up my code though, and
see if I can get the data that I want while being respectful of the
API and rate limits ...  :-)

@Humancell
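
One way to put the pieces above together without pounding the server: page backwards once with page/max_id (or just follow next_url) to backfill, then poll forward using the refresh_url the JSON response includes, which is simply ?since_id=<highest id already returned>&q=<query>. A rough Java sketch of the forward-polling half, assuming org.json; the 60-second sleep is an arbitrary choice.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import org.json.JSONObject;

public class RefreshPoller {
    static final String BASE = "http://search.twitter.com/search.json";

    // Fetch one page of search results and parse the JSON body.
    static JSONObject fetch(String url) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) body.append(line);
        in.close();
        return new JSONObject(body.toString());
    }

    public static void main(String[] args) throws Exception {
        String next = BASE + "?q=%23followfriday&rpp=100";   // #followfriday, URL-encoded
        while (true) {
            JSONObject page = fetch(next);
            // ... process page.getJSONArray("results") (newest first) ...
            // refresh_url carries only since_id and q, so re-append rpp if you
            // want more than the default page size on the next poll.
            next = BASE + page.getString("refresh_url");
            Thread.sleep(60 * 1000);
        }
    }
}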




[twitter-dev] Re: Search results issue

2009-03-03 Thread Matt Sanford

Hi Chris,

I just checked your example and it looks like the third entry (http://twitter.com/dailythomas/statuses/1266693521 
) has what you expect. Perhaps the issue is that most people using  
#food are also using #recipe. For hashtags we index both the #term and  
term alone so people searching for 'recipe' will also find '#recipe'  
if they don't know about hashtags.


— Matt

On Mar 3, 2009, at 01:35 PM, Chris wrote:



Hi guys,

When I search for recipe #food - I was expecting to see tweets that
are tagged with #food, and also contain the text 'recipe'.

But it seems as though it is picking up results that are tagged with
#food AND tagged with #recipe - is this expected behavior? (ref:
http://search.twitter.com/search?q=recipe+%23food)

Cheers,


Chris Rickard.




[twitter-dev] Re: Search API Source attribute

2009-02-25 Thread Matt Sanford

Hi Chad,

This anchor is how search gets the data from Twitter so we keep  
it consistent and pass it along that way. For the next version of the  
API we have an outstanding request to break these two apart (See http://code.google.com/p/twitter-api/issues/detail?id=75) 
.


Thanks;
  — Matt
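
Until issue 75 lands, the unescape-then-regex approach described below is about the best a client can do. A rough Java sketch; it only handles the entities that actually occur in the source field, and the sample string follows the escaped form visible in the JSON example elsewhere in this archive:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SourceAttributeParser {
    // Matches the href and the link text of the (already unescaped) anchor,
    // with or without quotes around the href.
    private static final Pattern ANCHOR =
            Pattern.compile("<a href=\"?([^\">]+)\"?[^>]*>([^<]+)</a>");

    static String unescape(String s) {
        return s.replace("&lt;", "<").replace("&gt;", ">")
                .replace("&quot;", "\"").replace("&amp;", "&");
    }

    public static void main(String[] args) {
        String source = "&lt;a href=&quot;http://www.tweetdeck.com/&quot;&gt;TweetDeck&lt;/a&gt;";
        Matcher m = ANCHOR.matcher(unescape(source));
        if (m.find()) {
            System.out.println("source_link: " + m.group(1));   // http://www.tweetdeck.com/
            System.out.println("source_name: " + m.group(2));   // TweetDeck
        }
    }
}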

On Feb 25, 2009, at 08:29 AM, Chad Etzel wrote:



Hey guys,

Thanks for adding the source attribute in the Search API results.   
One question:


Instead of returning something like
 &lt;a href=http://www.tweetdeck.com/&gt;TweetDeck&lt;/a&gt;

I think it would be nicer (for the app devs) if it were returned in
two parts, say source_name and source_link

In either case, the app has to do some work to use the source
attribute, and/or reconstruct the hyperlink.

In the current state, the app has to convert the escaped html entities
before it is usable as a link, or if it just wants the source name, it
has to do some sort of RegEx matching to extract it from the data
(same with the web address).  Both of these I consider to be
expensive operations.

I have no idea how this data is stored internally on Twitter's end, so
it may have been a decision made in the name of performance to return
the data the way it is now.

If the data is split into two attributes, getting the source name is
trivial, as is the web address, and the link can be reconstructed with
concatenation instead of using any string find/replace functions.


Would anyone else find this convenient?  I am happy to open an issue
and let people star it if so.

-Chad




[twitter-dev] Re: Search API Source attribute

2009-02-25 Thread Chad Etzel

Thanks for the info.

Anchor - that's the word I was looking for... could have searched
for that if I had remembered anchor.

Sorry for wasting your time, you may now shoot me /monty-python

-Chad

On Wed, Feb 25, 2009 at 11:44 AM, Matt Sanford m...@twitter.com wrote:
 Hi Chad,
     This anchor is how search gets the data from Twitter so we keep it
 consistent and pass it along that way. For the next version of the API we
 have an outstanding request to break these two apart
 (See http://code.google.com/p/twitter-api/issues/detail?id=75).
 Thanks;
   — Matt
 On Feb 25, 2009, at 08:29 AM, Chad Etzel wrote:

 Hey guys,

 Thanks for adding the source attribute in the Search API results.  One
 question:

 Instead of returning something like
 &lt;a href=http://www.tweetdeck.com/&gt;TweetDeck&lt;/a&gt;

 I think it would be nicer (for the app devs) if it were returned in
 two parts, say source_name and source_link

 In either case, the app has to do some work to use the source
 attribute, and/or reconstruct the hyperlink.

 In the current state, the app has to convert the escaped html entities
 before it is usable as a link, or if it just wants the source name, it
 has to do some sort of RegEx matching to extract it from the data
 (same with the web address).  Both of these I consider to be
 expensive operations.

 I have no idea how this data is stored internally on Twitter's end, so
 it may have been a decision made in the name of performance to return
 the data the way it is now.

 If the data is split into two attributes, getting the source name is
 trivial, as is the web address, and the link can be reconstructed with
 concatenation instead of using any string find/replace functions.


 Would anyone else find this convenient?  I am happy to open an issue
 and let people star it if so.

 -Chad



