If you do something like:
from:louisvillemojo OR louisville OR kentucky
it will do what you describe, but if you want to do something like
from:louisvillemojo OR (louisville AND kentucky)
then it will not work. You would have to do 2 separate queries in that case.
-Chad
On Thu, Oct 29, 2009
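Chad's workaround (running the two queries separately and combining them yourself) can be sketched roughly like this; the helper name and the result shape (dicts carrying an integer `id`, as in the Search API's JSON) are assumptions for illustration:

```python
def merge_results(*result_sets):
    """Merge tweets from several queries, de-duplicating by id."""
    seen = {}
    for results in result_sets:
        for tweet in results:
            seen[tweet["id"]] = tweet
    # Newest first, matching the API's own ordering.
    return sorted(seen.values(), key=lambda t: t["id"], reverse=True)

# Stand-in results for the two separate queries:
q1 = [{"id": 3, "text": "from louisvillemojo"}]
q2 = [{"id": 5, "text": "louisville kentucky"},
      {"id": 3, "text": "from louisvillemojo"}]
merged = merge_results(q1, q2)
```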
A number of people are seeing similar things, especially if you
specify a since_id:
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/e6289b6439c1d26d/e367ca8af09d28d5?lnk=gst&q=searches+returning+no+tweets&pli=1
My current (extremely bad) solution is to just hire hose
It looks as though it depends on the exact nature of the query.
The following always return up to date results, even with a since_id
(I haven't included those since_ids here)
http://search.twitter.com/search.json?q=hong+kong+OR+kowloon&rpp=100
Actually I can confirm my previous supposition; here is the log for an
empty 200 response with a new max_id:
DEBUG: 06:02:44 PM on Mon October 26th Doing CURL fetch with User
Agent: justsignal/1.0 (+http://justsignal.com) and REFERER:
we are seeing the same issue at our end. It gets better in the night
(PST) and then breaks in the morning.
I don't even see 403s, only 200s. Our search request (every 5 minutes)
comes back with none, one, or two results at the max, though I know
every minute there are about 100 messages (as we'v
This is happening RIGHT NOW for the following:
1) Go to search.twitter.com and enter tweetsforboobs OR
tweetforboobs as the search.
2) Go to http://tweetsforboobs.org and see the twitter feed on the
left.
Notice that the last tweet from 2 hours ago (VerticalMeasures) is not
in the twitter feed
Will someone from Twitter please respond if there is an ETA to resolve
this issue. Workarounds can never really be as effective as the real
deal.
I'm having problems with my code because it looks like the search
method is returning the created_at date in the following format: Fri,
16 Oct 2009 16:40:25 +0000
Everything else, and the documentation is using this format: Tue Feb
24 16:38:44 +0000 2009
Is this being fixed?
Yes, this
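One way to cope until it is fixed is to accept both `created_at` formats when parsing. A minimal sketch, assuming the stripped timezone offset above was the usual `+0000`:

```python
from datetime import datetime

# The two formats reported in this thread (offset assumed to be +0000):
SEARCH_FMT = "%a, %d %b %Y %H:%M:%S %z"  # "Fri, 16 Oct 2009 16:40:25 +0000"
REST_FMT = "%a %b %d %H:%M:%S %z %Y"     # "Tue Feb 24 16:38:44 +0000 2009"

def parse_created_at(value):
    """Accept created_at in either format until the discrepancy is fixed."""
    for fmt in (SEARCH_FMT, REST_FMT):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized created_at format: %r" % value)
```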
Chad,
Sorry for not being clear. I was thinking about Abraham Williams's
suggestion above where Twitter Search API works with authenticated
sessions+rate limiting, instead of IP based rate filtering. Just so
you know, App Engine has a 30-second timeout on requests to all App Engine
URLs, and a 10-second
I would recommend just using a physical server and uploading a simple
php proxy script. If you have existing webspace, it will save you the
trouble of setting up a complete EC2 build just to run a proxy
script.
On Oct 9, 7:11 pm, Akshar akshar.d...@gmail.com wrote:
Thanks Abraham.
Any
So basically, if it's not a 503 on the search API I should be clear?
On Oct 9, 5:11 pm, jmathai jmat...@gmail.com wrote:
Get used to receiving random 502 (and other response codes) from the
Twitter API. If you don't know exactly what the code means I suggest
retrying it. If it's explicit
Get used to receiving random 502 (and other response codes) from the
Twitter API. If you don't know exactly what the code means I suggest
retrying it. If it's explicit that you're being rate limited then
wait before you retry.
http://twitter.com/jkalucki/status/4686847704
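A retry loop along the lines jmathai describes might look like this sketch. `fetch` and `sleep` are injected stand-ins (hypothetical names) so the logic is testable offline, and 420 is treated as the Search API's "enhance your calm" rate-limit signal:

```python
import time

def fetch_with_retry(fetch, max_attempts=4, backoff=1.0, sleep=time.sleep):
    """Retry a fetch() that returns (status_code, body).

    5xx responses are retried with exponential backoff; 420 (the
    rate-limit code) backs off ten times harder before retrying.
    """
    delay = backoff
    for _ in range(max_attempts):
        status, body = fetch()
        if status == 200:
            return body
        if status == 420:            # rate limited: back off hard
            sleep(delay * 10)
        elif status >= 500:          # transient server error: retry soon
            sleep(delay)
        else:                        # other 4xx: retrying won't help
            raise RuntimeError("request failed with HTTP %d" % status)
        delay *= 2
    raise RuntimeError("giving up after %d attempts" % max_attempts)
```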
Thanks Abraham.
Any pointers on how to setup a proxy on amazon ec2 for GAE?
On Oct 8, 6:07 pm, Abraham Williams 4bra...@gmail.com wrote:
Pretty much. You have limited options:
1) Run your Search API requests through a proxy where you will have
exclusive access to the IP.
2) Wait for V2 of
I have solved the problem like this:
while I receive a 503 error, my application just keeps retrying the
query against Twitter.
Everything works ;)
http://apiwiki.twitter.com/Rate-limiting states that for cloud
platforms like Google App Engine, applications without a static IP
address cannot receive Search whitelisting.
Does that mean there is no way to avoid getting HTTP 503 response
codes to search requests from app engine?
On Oct 8,
Pretty much. You have limited options:
1) Run your Search API requests through a proxy where you will have
exclusive access to the IP.
2) Wait for V2 of the Twitter API where the REST and Search APIs get
combined so you can have authenticated search queries.
3) Hope Twitter slaps some duct tape on
I am also facing this issue. I'm only making a couple of requests
from GAE (about 3-4) and none of them are getting through, I keep
getting the following using Twitter4J
Twitter Exception while retrieving status
twitter4j.TwitterException: 400:The request was invalid. An
accompanying
Twitter should really in this case either white list all GAE IPs (I'm
sure an email to Google could get all IPs they use) or allow charging
API requests to an authenticated account rather than by IP (much like
the REST API does). This way each GAE application would just set up a
twitter account
Same here; my app runs on Google App Engine and 40% of the requests to
the Twitter Search API get the 503 error message indicating rate
limiting.
Is there anything we as app authors can do on our side to alleviate
the problem?
/Martin
On Oct 5, 1:53 pm, Paul Kinlan paul.kin...@gmail.com
Hi All,
GAE sites are problematic for the Twitter/Search API because the IPs
making outgoing requests are fluid and as such cannot easily be
allowed for access. Also, since most IPs are shared, other
applications on the same IPs making requests mean that fewer requests
per app get through.
One
Hi Chad,
I am sorry but that doesn't even help in the slightest.
You are essentially saying that we shouldn't develop on the App
Engine, since we would now have to also buy a proxy, which is completely
unfeasible and defeats the purpose of why people are using the app
engine.
I understand that
Hi. I have this problem too.
My application makes two requests per hour and it gets rate limited.
What is wrong? I think it is a problem on Twitter's end
On Oct 1, 01:45, Paul Kinlan paul.kin...@gmail.com wrote:
Hi Guys,
I have an app on the App engine using the search API and it is getting
heavily
Hi all,
I am having the same issue. I have tried setting a custom user-agent,
but this doesn't seem to affect the fact that twitter is limiting
based on I.P. address. I'm only making about 5 searches an hour and
80% of them are failing on app engine due to a 503 rate limit.
Twitter needs to
I'm noticing this problem as well. I'm making only a couple requests
per hour. I have tried setting the user-agent and the HTTP_REFERER
headers to a custom name, but Twitter doesn't seem to care.
On Oct 5, 2:59 am, steel steel...@gmail.com wrote:
Hi. I have this problem too.
My application
I am pretty sure there are custom headers on the App Engine that indicate
the application that is sending the request.
2009/10/5 elkelk danielshaneup...@gmail.com
Hi all,
I am having the same issue. I have tried setting a custom user-agent,
but this doesn't seem to affect the fact that
add either -from:user or from:-user to the query (I can't quite remember
which).
On Fri, Oct 2, 2009 at 06:44, Greg gregory.av...@gmail.com wrote:
Is there a way to use the Search API to not return results from a
selected user?
--
Internets. Serious business.
(this could be overcome, I suppose, by performing multiple queries,
but that isn't much of a solution if you want to use the stock twitter
js search widget, etc)
On Sep 27, 11:37 am, zapnap npla...@gmail.com wrote:
Search API queries appear to be limited to 140 characters. I mean,
that's cute
Hello,
The limit is indeed 140 and most likely won't be going up any time
soon. The limit exists for performance reasons. In order to
do timely queries we don't allow longer/arbitrary queries, which
could be very complex.
-Chad
On Mon, Sep 28, 2009 at 3:08 PM, zapnap
If you need to search specific users why don't you use the Shadow API
and grab all of their tweets and then search them locally?
On Sep 28, 3:14 pm, Chad Etzel c...@twitter.com wrote:
Hello,
The limit is indeed 140 and most likely won't be going up any time
soon. The reason for the limit is
The Search API would rock if only it were reliable.
What we see looks to be some sort of funky cache: a query (atom)
can be missing some of the latest tweets, and then after a while they show up;
if you tweak the query you can see 'em.
Have you ever seen this problem?
Also, what did you do special with user
The Search team is working on indexing latency and throughput, along
with many other things. There have been big improvements recently
and more are on the way.
In the mean time, if you need closer to real-time results, consider
the track parameter on the Streaming API.
-John Kalucki
John, the original message of this thread is about rate limit being
totally erratic, as several users have noticed. here is the detail of
what I'm seeing:
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/40c82b4dbc0536bd
Here is another user reporting the problem :
Various APIs have their own rate limiting mechanisms. The www, search
and streaming rate limits are all customized to their usage patterns
and share little to no code and/or state.
-John
On Sep 4, 9:49 am, Reivax xavier.yo...@gmail.com wrote:
John, the original message of this thread is about
Dewald,
I'm not on the search team, but there are a lot of discussions over
there this morning about search api rate limits and related issues.
Search rate limiting issues (vs. www.twitter.com or api.twitter.com)
probably boil down to one of three categories:
1) Search service interruptions -
The rpp defaults to 15 or something if you don't specify it. Sounds
like you need to mess around and play with things a bit more.
The key to max search results isn't in paging or rpp, but in max_id.
Be careful what you ask for. Retrieval of everything available can
take a long time (hours)
The key to max search results isn't in paging or rpp, but
in max_id.
Hi David,
I do not understand how max_id can help me.
If I want to get the 10,000 most recent tweets that match
the phrase "michael jackson", changing the max_id value
doesn't seem like it's going to help at all.
In fact,
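For what it's worth, the max_id walk David describes could be sketched like this; `search()` is a hypothetical stand-in for the actual Search API call, assumed to return tweet dicts newest-first, each with an integer `id`:

```python
def fetch_all(search, query, rpp=100, limit=10000):
    """Page backwards through results using max_id.

    max_id=None means "no upper bound"; each subsequent request asks
    only for tweets strictly older than the oldest one already seen.
    """
    tweets = []
    max_id = None
    while len(tweets) < limit:
        page = search(query, rpp, max_id)
        if not page:
            break
        tweets.extend(page)
        max_id = page[-1]["id"] - 1
    return tweets[:limit]
```

Capping with a fixed limit matters: as David warns, retrieving everything available can take hours.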
Yes, earlier in the week we saw a lot of these reported by TweetDeck
users too. Seems to have tailed off now though.
On Aug 20, 4:42 pm, Marco Kaiser kaiser.ma...@gmail.com wrote:
Hi,
we are receiving an increasing number of reports from users about search
results containing tweets that don't
I have seen the same which is affecting quality of results at
Twaller.com. I have communicated this issue with the Twitter Team, you
can see my post at this forum 3-4 days back. This seems to be a very
recent phenomenon.
On Aug 20, 7:42 am, Marco Kaiser kaiser.ma...@gmail.com wrote:
Hi,
we are
Hi Chad,
we are getting reports from the users of our desktop clients, so the user
agent will either contain twhirl or Seesmic Desktop. We'll try to get
the queries used from our users, but unfortunately, we'll not be able to
provide any of the other information, as it all happens on users'
On Tue, Aug 11, 2009 at 10:30 AM, David Fisher tib...@gmail.com wrote:
While I haven't done scientific testing of this, I was able to run up
to 3-4 instances of my search script at a time before it told me
to enhance my calm. Now I'm barely able to run one without hitting the
limit. I
The user agent for each search request is the same. I'm using the Ruby
Twitter API wrapper, so sending anything else with search requests
isn't possible unless that is now deprecated.
dave
On Aug 11, 10:36 am, Andrew Badera and...@badera.us wrote:
On Tue, Aug 11, 2009 at 10:30 AM, David
David,
I don't know Ruby, so I don't know if this is possible.
But, if possible you need to edit your copy of the Twitter API wrapper
and set the user agent to something that is unique to your service.
If you use the same user agent as everyone else who is using that
wrapper, then you are
Hi Dave,
I'm not sure which twitter wrapper you are using. But if you're using Dan
Croak's from here:
http://github.com/dancroak/twitter-search
You might need to update your gem, and make sure you specify the name of
your app as the agent instead of using the default twitter-search.
Yu-Shan
In addition to setting a unique user-agent, I believe it was requested that
we set a referrer header that pointed back to a domain.
On Tue, Aug 11, 2009 at 9:30 AM, David Fisher tib...@gmail.com wrote:
While I haven't done scientific testing of this, I was able to run up
to 3-4 instances of
The referrer is not as important as the user-agent. You can also put
your URL in the user-agent instead.
-Chad
On Tue, Aug 11, 2009 at 4:09 PM, Larry Wright larrywri...@gmail.com wrote:
In addition to setting a unique user-agent, I believe it was requested that
we set a referrer header that
I don't have a domain to point back to. I'm doing data-mining and
analysis on a server that isn't public.
I have set the User-Agent to something unique (I thought you were
saying to change it for every request?).
Yet I'm still getting rate limited and told to back off a lot. Ryan S
said it might
Doug,
Is there any status update on this issue? Users are really starting to
get frustrated with results and wondering what the status is on things
getting back to being consistent...
Thanks!
Brooks
On Jul 21, 3:45 pm, Doug Williams d...@twitter.com wrote:
Chad, your assessment is spot on.
Matt,
Here is another thread pseudo-related to the issue.
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/b7b6859620327bad/77927af246c77907#77927af246c77907
Again, thanks to Chad.
Brooks
On Jul 21, 1:35 pm, matthew jesc...@gmail.com wrote:
Chad,
Good to know.
Chad,
It looks like your 'mi' units parameter has been truncated to 'm'.
When I add the 'i' to the string it works for me. It may be that it is
returning results within 5 meters.
Matthew
On Jul 22, 3:25 pm, Chad Etzel jazzyc...@gmail.com wrote:
Did the geocode operator stop working?
I just tried a
On Wed, Jul 22, 2009 at 4:03 PM, matthew jesc...@gmail.com wrote:
Chad,
It looks like your 'mi' units parameter has been truncated to 'm'.
When I add the 'i' to the string it works for me. It may be that it is
returning results within 5 meters.
Doh! You're right... added the 'i' and all is well.
Brooks,
Thanks for the link - helps me understand some of the stuff I've been
seeing.
Matthew
On Jul 22, 1:15 pm, Brooks Bennett bsbenn...@gmail.com wrote:
Matt,
Here is another thread pseudo-related to the issue.
http://groups.google.com/group/twitter-development-talk/browse_thread...
That usually happens when the search servers get out of sync and the
since_id tweet hasn't been indexed on the other server(s) yet, so it
thinks it's a tweet from the future.
-Chad
On Tue, Jul 21, 2009 at 12:38 PM, matthew jesc...@gmail.com wrote:
I am polling the Search API and intermittently
Chad,
Good to know. Thanks for your help.
Matthew
On Jul 21, 2:13 pm, Chad Etzel jazzyc...@gmail.com wrote:
That usually happens when the search servers get out of sync and the
since_id tweet hasn't been indexed on the other server(s) yet, so it
thinks it's a tweet from the future.
-Chad
Chad, your assessment is spot on.
At the heart of search there are a number of data stores that accept queries
(reads) while at the same time perform writes from an indexer. Heavy load --
large numbers of queries, large numbers of writes, or both -- can
cause the write replication between
Thanks for posting this Chad!
Doug, please keep us updated on how things progress with this issue so
we can pass along guidance to our user-base. Hopefully the
improvements will come in the near-term.
Thanks for all that you guys do!
Brooks
On Jul 21, 3:45 pm, Doug Williams d...@twitter.com
Same thing here, since_id is totally ignored and I'm getting
duplicated results
On Jul 14, 12:50 pm, Chad Etzel jazzyc...@gmail.com wrote:
I'm noticing something strange in my search logs at the moment... I'm
getting back a full set of results (number of results = rpp) when
using since_id
I seem to be having a similar issue, for the last 30 minutes or so.
-Ryan
On Jul 14, 1:50 pm, Chad Etzel jazzyc...@gmail.com wrote:
I'm noticing something strange in my search logs at the moment... I'm
getting back a full set of results (number of results = rpp) when
using since_id when I
For others' edification:
Twitter devs have said this is a bug and they are actively working on
resolving it. In the mean time, I am checking search result IDs
against the since_id I passed in and just cut off duplicate results
before I do anything with them... This seems to be a good general
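The duplicate-cutting workaround described above amounts to something like this one-liner (tweet dicts with an integer `id` field are assumed):

```python
def newer_than(results, since_id):
    """Cut off anything at or below the since_id we passed in."""
    return [t for t in results if t["id"] > since_id]

results = [{"id": 7}, {"id": 5}, {"id": 3}]  # stand-in search results
fresh = newer_than(results, 5)               # only id 7 should survive
```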
Because search was originally a separate company that Twitter acquired. And
they didn't provide XML.
There is a plan to fix this:
http://apiwiki.twitter.com/V2-Roadmap#MergingRESTandSearchAPIs
On Mon, Jul 6, 2009 at 10:03, Carlos carlos.crose...@gmail.com wrote:
Hi, by looking at the search API
Atom is XML
On Jul 6, 8:03 am, Carlos carlos.crose...@gmail.com wrote:
Hi, by looking at the search API docs I see the output format is JSON
and Atom, why not XML? Forgive me, I haven't tried myself to request
xml to see what I get, but hopefully the docs are obsolete and XML is
As is RSS, but RSS, XML, JSON and Atom are the four formats that Twitter
provides on various methods.
Abraham
On Mon, Jul 6, 2009 at 16:35, Ben Metcalfe ben.metca...@gmail.com wrote:
Atom is XML
On Jul 6, 8:03 am, Carlos carlos.crose...@gmail.com wrote:
Hi, by looking at the search API
I think he means the XML schema that's returned if you use the .xml suffix
for many API calls.
On Mon, Jul 6, 2009 at 15:35, Ben Metcalfe ben.metca...@gmail.com wrote:
Atom is XML
On Jul 6, 8:03 am, Carlos carlos.crose...@gmail.com wrote:
Hi, by looking at the search API docs I see the
With one call to the statuses/show method [1] you could have all of the
information you need to construct the permanent URL.
1. http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-statuses%C2%A0show
Thanks,
Doug
On Thu, Jun 25, 2009 at 8:31 AM, jesse je...@mailchimp.com wrote:
I've been
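Doug's suggestion (one statuses/show call for the tweet's author, then build the permalink yourself) boils down to this tiny sketch:

```python
def status_url(screen_name, status_id):
    """Permanent-link pattern for a single tweet, as used in this thread."""
    return "http://twitter.com/%s/status/%d" % (screen_name, status_id)
```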
Thanks for the quick reply, Doug. From that I would create:
http://twitter.com/dougw/status/1472669360
If you change your screen name, that link is going to break. If it
didn't, I'd be fine with the Search API since it includes screen_names
and status ids.
Or am I being obtuse and missing
It would break if I changed my screen name but that is a very rare case. If
your application depends deeply on these links not breaking, I'd suggest you
cache status objects for a day, and refresh the cache and links daily.
Thanks,
Doug
On Thu, Jun 25, 2009 at 9:07 AM, jesse
Yes I'm seeing this also, with this query:
http://search.twitter.com/search.json?lang=en&show_user=true&rpp=100&since=2009-06-25&until=2009-06-25&q=Cloud
On Jun 24, 7:57 am, Mojosaurus ish...@gmail.com wrote:
Hi,
My script polls Twitter APIs once every 15 seconds with a query
Hi all,
I'm fairly new to app development and am working with Google Appengine
at the moment. My app (http://www.twitwheel.com/) makes two calls to
the search API for each page view. I've just added the user agent to
my urlfetch calls. Do I still need to worry about the 100/hour rate
limit? I've
Hmm, yes. I am seeing the same thing with the geocode: and source:
modifiers. Is this a bug?
-Chad
On Wed, Jun 24, 2009 at 7:57 AM, Mojosaurus ish...@gmail.com wrote:
Hi,
My script polls Twitter APIs once every 15 seconds with a query like
We are seeing this as well.
On Jun 24, 4:57 am, Mojosaurus ish...@gmail.com wrote:
Hi,
My script polls Twitter APIs once every 15 seconds with a query
like http://search.twitter.com/search.atom?q=video%20filter:links&rpp=100...
Starting 2009-06-23, this API returns http 403, with the
My script polls Twitter APIs once every 15 seconds with a query
like http://search.twitter.com/search.atom?q=video%20filter:links&rpp=100...
Starting 2009-06-23, this API returns http 403, with the following
error message.
<hash>
  <error>since date or since_id is too old</error>
</hash>
On Wed, Jun 24, 2009 at 4:02 PM, Cameron Kaiser spec...@floodgap.com wrote:
I believe this error occurs when the search result would generate more than
one page of results and a since argument (since or since_id) is given.
Certainly something like that is bound to happen at some point, even at 100
I think you misspelled Ar, matey!
On Tue, Jun 16, 2009 at 9:22 PM, Brian Gilham bgil...@gmail.com wrote:
R
--
*From*: Doug Williams
*Date*: Tue, 16 Jun 2009 17:31:11 -0700
*To*: twitter-development-talk@googlegroups.com
*Subject*: [twitter-dev] Re
Doug,
citing from your original mail:
Any request not including this information will be returned a 403 Forbidden
response code by our web server.
How does it map to what you say now, that a best effort is sufficient, if
you reject any request without those header(s) with a 403 response? Again,
Marco,
I was giving us breathing room. In 6 days, we will require this data but
enforcement will be manual in most cases. My strict language above is to
ensure that developers know we reserve the right to terminate their
applications without warning if they are abusing the system and not
including
Matt & Doug,
Here's some more information to help fingerprint search requests:
The MGTwitterEngine library sends the following X headers by default:
X-Twitter-Client: MGTwitterEngine
X-Twitter-Client-Url: http://mattgemmell.com/source
X-Twitter-Client-Version: 1.0
These can be overridden by
Hi Craig,
I didn't know about the X-Twitter-Client headers, thanks for the
info.
Thanks;
– Matt Sanford / @mzsanford
Twitter Dev
On Jun 17, 2009, at 10:09 AM, Craig Hockenberry wrote:
Matt & Doug,
Here's some more information to help fingerprint search requests:
The
Craig,
That is an excellent example of what we would like to see. You've identified
your application and given us the URL to learn about it. Perfect.
Thanks for sharing.
Doug
On Wed, Jun 17, 2009 at 10:15 AM, Matt Sanford m...@twitter.com wrote:
Hi Craig,
I didn't know about the
Setting the user agent is not only in the best interest of Twitter.
It's in your best interest as well.
I've been setting my user agent from almost day #1 of my service, and
on several occasions it has helped me to get quick response and issue
resolution from the API team for both REST and
Thanks Doug - Any additional info to help us know if we comply? My dev is
out of the country on vacation and want to make sure we don't miss anything.
On 6/16/09 11:33 AM, Doug Williams d...@twitter.com wrote:
Hi all,
The Search API will begin to require a valid HTTP Referrer, or at the very
Indeed, some clearer criteria would be most appreciated.
--
Ed Finkler
http://funkatron.com
Twitter:@funkatron
AIM: funka7ron
ICQ: 3922133
XMPP:funkat...@gmail.com
On Jun 16, 12:51 pm, Justyn Howard justyn.how...@gmail.com wrote:
Thanks Doug - Any additional info to help us know if we comply?
Thanks, pretty sure we do both. Will this new (or newly enforced) policy
help clean up some garbage?
On 6/16/09 11:56 AM, Doug Williams d...@twitter.com wrote:
All we ask is that you include a valid HTTP Referrer and/or a User Agent with
each request which is easy to do in almost every
The logical thing would be to set the referrer to the domain name of
your application. If it doesn't have one I'd say use your Twitter user
URL (i.e. http://twitter.com/stut).
Most HTTP libs in most languages will set a default user agent, and
it's usually pretty easy to override it. I'd suggest
It's optional in the HTTP spec, but mandatory for the Twitter Search
API. I don't see a problem with that.
Doug: Presumably the body of the 403 response will contain a suitable
descriptive error message in the usual format?
-Stuart
--
http://stut.net/projects/twitter
2009/6/16 Naveen Kohli
On Tue, Jun 16, 2009 at 1:05 PM, Stuart stut...@gmail.com wrote:
It's optional in the HTTP spec, but mandatory for the Twitter Search
API. I don't see a problem with that.
Erm, for sites like TweetGrid, TweetChat, etc, which are all
browser-based client-side driven sites, the users' browser
Totally understand the need. I asked for clearer criteria because in
message one, you state you'll require
a valid HTTP Referrer or a meaningful and unique user agent
I can probably define a valid HTTP Referrer as containing a URL that
exists, but a meaningful/unique user agent is somewhat in
Hi all,
Let me clarify a bit. For server-side processing please set the
User-Agent header. I recommend using your domain name, or if you don't
have one (which is odd) your appname. Something like myapp.com or
myapp. By using domain name we'll be able to check out the site and
reach
I checked and TweetGrid was setting a referrer (on the page I tested,
it was http://tweetgrid.com/grid?l=0), and as Matt said all should be
fine for us Client-side Search API peeps.
Brooks
On Jun 16, 12:10 pm, Chad Etzel jazzyc...@gmail.com wrote:
On Tue, Jun 16, 2009 at 1:05 PM,
Thanks for chiming in on this Chad!
On Jun 16, 12:10 pm, Chad Etzel jazzyc...@gmail.com wrote:
On Tue, Jun 16, 2009 at 1:05 PM, Stuart stut...@gmail.com wrote:
It's optional in the HTTP spec, but mandatory for the Twitter Search
API. I don't see a problem with that.
Erm, for sites like
2009/6/16 Chad Etzel jazzyc...@gmail.com
On Tue, Jun 16, 2009 at 1:05 PM, Stuart stut...@gmail.com wrote:
It's optional in the HTTP spec, but mandatory for the Twitter Search
API. I don't see a problem with that.
Erm, for sites like TweetGrid, TweetChat, etc, which are all
browser-based
Hey guys.
This has already been banged out in the RSS wars (of which I'm a
veteran and have the battle scars).
Don't use a Referrer unless it's literally a page with a link or
search page.
You should use a User-Agent here (which is what it is designed for).
The browser should generally send
Redefining HTTP spec, eh :-)
Whatever makes Twitter's boat float. Let's hope for the best. Just concerned
that some firewalls or proxies tend to remove the referrer.
On Tue, Jun 16, 2009 at 1:05 PM, Stuart stut...@gmail.com wrote:
It's optional in the HTTP spec, but mandatory for the Twitter Search
If the User-Agent/Referrer says Twitpay, and it's really me, when Twitter
contacts me, I'll answer, and we'll work it out.
If the User-Agent/Referrer says Twitpay, and it's *not* really me, when
Twitter contacts me, I'll tell them, and they'll block the IP.
It's a starting point for figuring
I agree with Stuart, this might be tricky for client applications that are
running behind firewalls / proxies that might remove both header fields, and
neither the app author nor the user might have any control over this.
Finally, that means you'll lock out those people from using search in their
How does one set the http referrer and user agent?
On Jun 16, 12:33 pm, Doug Williams d...@twitter.com wrote:
Hi all,
The Search API will begin to require a valid HTTP Referrer, or at the very
least, a meaningful and unique user agent with each request. Any request not
including this
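To answer the "how does one set these" question concretely, here is a minimal sketch using Python's urllib; the app name and domain are hypothetical placeholders, and no request is actually sent:

```python
import urllib.request

# Hypothetical identifiers -- substitute your own app name and domain.
req = urllib.request.Request(
    "http://search.twitter.com/search.json?q=twitter",
    headers={
        "User-Agent": "myapp.com",        # Doug asks for your domain or app name
        "Referer": "http://myapp.com/",   # note: spelled "Referer" on the wire
    },
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```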
On Tue, Jun 16, 2009 at 5:05 PM, Matt Sanford m...@twitter.com wrote:
Hi there,
While all of this flame is keeping my feet warm it's not really
productive.
Are you sure this is a flame war as defined by RFC 1855 [1]?
...sorry, had to :)
-Chad
[1] http://www.faqs.org/rfcs/rfc1855.html
Matt,
far from getting into RFC debates, but really concerned for the non-server
apps out there, which may not have full control over the network
infrastructure they run on. If I set up my own server(s) at a data center, I
sure can take care of sending you the right referrer and user-agent, but
You are still missing my point - desktop clients may not be able to send a
User Agent or Referrer, based on the network infrastructure the user is
locked into. Nothing in your response addressed this issue.
I am fully willing to send the requested data in the clients (and I already
do), but I have
R
-Original Message-
From: Doug Williams d...@twitter.com
Date: Tue, 16 Jun 2009 17:31:11
To: twitter-development-talk@googlegroups.com
Subject: [twitter-dev] Re: Search API to require HTTP Referrer and/or User
Agent
For most applications, enforcement of this requirement
Hi there,
To get more results you'll need to paginate. We cannot offer an
API that returns thousands (or millions) of results in one request.
Thanks;
– Matt Sanford / @mzsanford
Twitter Dev
On May 31, 2009, at 5:53 PM, Joseph wrote:
If I do a search the API, is there an easier
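Matt's pagination advice amounts to stepping the page parameter; a sketch of building the successive request URLs (rpp maxes out at 100, and if memory serves the API returns roughly 1500 results per query at most):

```python
def paged_urls(query, pages=3, rpp=100):
    """Build page-by-page Search API URLs for one query."""
    base = "http://search.twitter.com/search.json"
    return ["%s?q=%s&rpp=%d&page=%d" % (base, query, rpp, p)
            for p in range(1, pages + 1)]
```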
Hi Jim,
There is no known issue but if you can provide the curl command
you're using we might be able to help.
Thanks;
– Matt Sanford / @mzsanford
Twitter Dev
On May 26, 2009, at 5:31 PM, Jim Whimpey wrote:
The API seems to be ignoring my rpp parameter. On the website I change