Marco,
I was giving us breathing room. In 6 days we will require this data, but
enforcement will be manual in most cases. My strict language above is to
ensure that developers know we reserve the right to terminate their
applications without warning if they are abusing the system and not
including this required information.

Thanks,
Doug
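[Editor's note: for readers following the thread, the identification Doug and Matt ask for can be sketched as below. This is a minimal illustration, not official Twitter sample code; the application name, contact address, and Referer URL are made up, and the endpoint reflects the 2009-era search.twitter.com API. Note the HTTP header is historically spelled "Referer".]

```python
# Attach a meaningful, unique User-Agent (and optionally a Referer) to a
# Search API request, as the thread recommends. No request is sent here;
# we only build the request object and show the headers it would carry.
from urllib.request import Request

SEARCH_URL = "http://search.twitter.com/search.json?q=%23twitterapi"

req = Request(
    SEARCH_URL,
    headers={
        # Identifies the application and gives Twitter a contact point
        "User-Agent": "ExampleSearchClient/1.0 (contact: dev@example.com)",
        # Fallback marker, useful for client-side JSON users
        "Referer": "http://example.com/my-twitter-app",
    },
)

print(req.get_header("User-agent"))
```

Bear in mind Marco's caveat: intermediate proxies or firewalls may strip either header in transit, which is exactly the failure mode debated below.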




On Wed, Jun 17, 2009 at 3:32 AM, Marco Kaiser <kaiser.ma...@gmail.com> wrote:

> Doug,
>
> citing from your original mail:
>
> "Any request not including this information will be returned a 403
> Forbidden response code by our web server."
>
> How does that square with what you say now, that "a best effort is
> sufficient", if you reject any request without those header(s) with a 403
> response? Again, I am not afraid of an IP or User-Agent ban because of
> missing header data; what I fear is a rejection of search requests when the
> header data is stripped by network gear. At least that's how I read your
> announcement of this change - or am I wrong? Will you only reject requests
> from certain high-volume IPs based on the Referrer/User-Agent requirement,
> while in general the Search API doesn't require the headers to be present?
>
>
> Marco
>
> 2009/6/17 Doug Williams <d...@twitter.com>
>
>> For most applications, enforcement of this requirement will be subject to
>> manual review. We want a marker (Referrer and/or User-Agent) to help us
>> understand who the top searchers are when problems arise, and whether we
>> can determine a better data access plan for their needs. End-users and
>> clients never hit our trip-wires, as they are not individually querying the
>> API frequently enough to warrant a manual review. For your needs, Marco, a
>> best effort to include the requested data is sufficient on our end and will
>> not cause any problems if the data is removed by network gear.
>>
>> Services running on cloud-based hosts such as EC2 and AppEngine will,
>> however, be subject to programmatic enforcement of this policy.
>> Additionally, we reserve the right to add hosts to this list if we find
>> that a host is being used to exploit our service. This is to protect the
>> service against abuse, which often comes from shared hosts such as these.
>>
>> Thanks,
>> Doug
>>
>>
>>
>>
>> On Tue, Jun 16, 2009 at 3:19 PM, Marco Kaiser <kaiser.ma...@gmail.com> wrote:
>>
>>> You are still missing my point - desktop clients may not be able to send
>>> a User-Agent or Referrer, depending on the network infrastructure the user
>>> is locked into. Nothing in your response addressed this issue.
>>>
>>> I am fully willing to send the requested data in the clients (and I
>>> already do), but I have no means to make sure it reaches you. So if it
>>> doesn't, even though I am doing everything you ask of me, you'll still
>>> lock the user out of search in his client. I am not worried about being
>>> blocked; it's merely that the requirement to provide one of the two HTTP
>>> headers may be impossible for client apps to meet. So low-volume clients
>>> (in terms of clients-per-IP, not overall) clearly WILL be affected.
>>>
>>> Marco
>>>
>>> 2009/6/17 Doug Williams <d...@twitter.com>
>>>
>>> As you have determined, we just want a better way to track who is making
>>>> requests and at what volume. If you are doing janky things and we don't
>>>> know who you are (no referrer or user agent), then we have no contact for
>>>> your application. We will block the IP address and move on.
>>>>
>>>> However, if you would like to give us a chance to work with you before
>>>> terminating your access unexpectedly, please provide us with enough of a
>>>> hint (through an HTTP Referrer and/or User-Agent) to determine who you
>>>> are so we can have any necessary conversations.
>>>>
>>>> We do not feel that this is an unreasonable request. Low-volume clients
>>>> will not be affected. Anyone doing anything that bubbles to the top of
>>>> the logs, however, may be subject to scrutiny.
>>>>
>>>> Thanks,
>>>> Doug
>>>>
>>>>
>>>> --
>>>> Do you follow me? http://twitter.com/dougw
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jun 16, 2009 at 2:47 PM, Chad Etzel <jazzyc...@gmail.com> wrote:
>>>>
>>>>>
>>>>> Perhaps some sort of signature/app value in the URL request query
>>>>> string? That will make it through proxies and firewalls, and is just
>>>>> as easily spoofed as HTTP-Referrer and User-Agents...
>>>>>
>>>>> -Chad
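[Editor's note: Chad's query-string idea can be sketched as follows. The parameter name "source" is illustrative here, borrowed from Stuart's later mention of the basic-auth source parameter; it is not an official Search API field, and the endpoint is the 2009-era search.twitter.com URL.]

```python
# Carry an app marker in the URL query string instead of in headers.
# Proxies and firewalls that strip Referrer/User-Agent headers generally
# leave URL parameters untouched, which is the point of the suggestion.
from urllib.parse import parse_qs, urlencode, urlparse

BASE = "http://search.twitter.com/search.json"

params = {"q": "#twitterapi", "source": "examplesearchclient"}
url = BASE + "?" + urlencode(params)

print(url)
```

As Chad notes, this marker is just as easy to spoof as the headers; it only helps Twitter identify well-behaved applications, not stop determined abusers.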
>>>>>
>>>>> On Tue, Jun 16, 2009 at 5:36 PM, Marco Kaiser <kaiser.ma...@gmail.com>
>>>>> wrote:
>>>>> > Matt,
>>>>> >
>>>>> > far from getting into RFC debates, but I'm really concerned for the
>>>>> > non-server apps out there, which may not have full control over the
>>>>> > network infrastructure they run on. If I set up my own server(s) at a
>>>>> > data center, I can certainly take care of sending you the right
>>>>> > referrer and user-agent, but unfortunately that's not the case in many
>>>>> > environments behind firewalls and/or proxies.
>>>>> >
>>>>> > What's your take on that? I fully understand your intention and the
>>>>> > need for some identification - so I'm happy to discuss anything that
>>>>> > will also work through restricted network access.
>>>>> >
>>>>> > Thanks,
>>>>> > Marco
>>>>> >
>>>>> > 2009/6/16 Matt Sanford <m...@twitter.com>
>>>>> >>
>>>>> >> Hi there,
>>>>> >>     While all of this flame is keeping my feet warm, it's not really
>>>>> >> productive. This isn't the Slashdot comments; let's try to remain on
>>>>> >> topic rather than getting into RFC debates. To be even more explicit
>>>>> >> than my previous email: use the user-agent. The referrer will be
>>>>> >> taken care of by browsers, and I see it as a fallback for client-side
>>>>> >> JSON users rather than a replacement for a user-agent.
>>>>> >>     The subsequent reply from Michael Ivey about how this helps is
>>>>> >> dead on. With no context at all, I'm forced to block all of
>>>>> >> EC2/AppEngine/Yahoo Pipes if one person misbehaves. Nobody likes
>>>>> >> that. Since search is not authenticated, OAuth does not really help
>>>>> >> here. We may be forced to make search authenticated if we can't find
>>>>> >> a reasonable way to sort the good from the bad. This is a first
>>>>> >> attempt at helping us cut out poorly built spam scripts and shorten
>>>>> >> the time I spend researching each abuser. It saves time and lets me
>>>>> >> fix more bugs, assuming I don't spend the newly saved time in RFC
>>>>> >> debates, that is :)
>>>>> >>
>>>>> >> Thanks;
>>>>> >>  – Matt Sanford / @mzsanford
>>>>> >>      Twitter Dev
>>>>> >> On Jun 16, 2009, at 12:39 PM, Stuart wrote:
>>>>> >>
>>>>> >> 2009/6/16 Naveen Kohli <naveenko...@gmail.com>
>>>>> >>>
>>>>> >>> Redefining the HTTP spec, eh :-)
>>>>> >>> Whatever makes Twitter's boat float. Let's hope for the best. Just
>>>>> >>> concerned that some firewalls or proxies tend to remove the
>>>>> >>> "referrer" header.
>>>>> >>
>>>>> >> What a completely ridiculous thing to say. It's not "redefining"
>>>>> >> anything. If Twitter want to require something in order to access
>>>>> >> their service, they absolutely have that right. It's not like they're
>>>>> >> saying every HTTP server should start requiring these headers.
>>>>> >> It's true that some firewalls and proxies remove the referrer header,
>>>>> >> and some also remove the user agent header.
>>>>> >> I'm somewhat unclear on exactly how this stuff is supposed to help.
>>>>> >> If an application sets out to abuse the system, it will simply set
>>>>> >> the headers to look like a normal browser's. I don't see what purpose
>>>>> >> requiring these headers to be something useful will actually serve.
>>>>> >> IMHO you might as well "require" the source parameter for all API
>>>>> >> requests that use basic auth, which is simple for all apps to
>>>>> >> implement; OAuth clearly carries identification with it already.
>>>>> >> -Stuart
>>>>> >> --
>>>>> >> http://stut.net/projects/twitter
>>>>> >>>
>>>>> >>> On Tue, Jun 16, 2009 at 1:05 PM, Stuart <stut...@gmail.com> wrote:
>>>>> >>>>
>>>>> >>>> It's optional in the HTTP spec, but mandatory for the Twitter
>>>>> Search
>>>>> >>>> API. I don't see a problem with that.
>>>>> >>>>
>>>>> >>>> Doug: Presumably the body of the 403 response will contain a
>>>>> suitable
>>>>> >>>> descriptive error message in the usual format?
>>>>> >>>>
>>>>> >>>> -Stuart
>>>>> >>>>
>>>>> >>>> --
>>>>> >>>> http://stut.net/projects/twitter
>>>>> >>>>
>>>>> >>>> 2009/6/16 Naveen Kohli <naveenko...@gmail.com>:
>>>>> >>>> > Why would you make a decision based on "Referrer", which is an
>>>>> >>>> > OPTIONAL header field in the HTTP protocol? Making the decision
>>>>> >>>> > based on something that is "REQUIRED" may be more appropriate.
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> > On Tue, Jun 16, 2009 at 12:33 PM, Doug Williams <d...@twitter.com>
>>>>> >>>> > wrote:
>>>>> >>>> >>
>>>>> >>>> >> Hi all,
>>>>> >>>> >> The Search API will begin to require a valid HTTP Referrer or,
>>>>> >>>> >> at the very least, a meaningful and unique user agent with each
>>>>> >>>> >> request. Any request not including this information will be
>>>>> >>>> >> returned a 403 Forbidden response code by our web server.
>>>>> >>>> >>
>>>>> >>>> >> This change will be effective within the next few days, so
>>>>> >>>> >> please check your applications using the Search API and make any
>>>>> >>>> >> necessary code changes.
>>>>> >>>> >>
>>>>> >>>> >> Thanks,
>>>>> >>>> >> Doug
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> > --
>>>>> >>>> > Naveen K Kohli
>>>>> >>>> > http://www.netomatix.com
>>>>> >>>> >
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> >>> --
>>>>> >>> Naveen K Kohli
>>>>> >>> http://www.netomatix.com
>>>>> >>
>>>>> >>
>>>>> >
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
>
