You should get a "verified account" since they'll presumably want to be a
trusted provider of information.
- h
On Mon, Sep 7, 2009 at 20:01, spyrrow wrote:
>
> I need to set up a Twitter account for one of my clients, which is a
> public library. Anything I need to set up differently than what
How do I get the number of results for a given search phrase? I don't
want the results themselves; I just want to know the size of the
result set for any given phrase. For example, "jnni hinklebootmurgh"
returns 0 results, whereas "michael jackson" returns gazillions. How can
I get just the total nu
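As far as I know, the Search API at search.twitter.com doesn't return an exact total count, so one workaround is to page through results with rpp=100 and count until a short page comes back (capped, since deep paging is limited). A rough Python sketch, assuming the 2009-era search.json endpoint and its "results" array; the fetch parameter is just a hypothetical hook for testing without the network:

```python
import json
import urllib.request
from urllib.parse import urlencode

SEARCH_URL = "http://search.twitter.com/search.json"  # 2009-era Search API

def search_page(query, page, rpp=100, fetch=None):
    """Fetch one page of search results; returns the list of result dicts."""
    url = SEARCH_URL + "?" + urlencode({"q": query, "rpp": rpp, "page": page})
    fetch = fetch or (lambda u: urllib.request.urlopen(u).read())
    return json.loads(fetch(url)).get("results", [])

def count_results(query, rpp=100, max_pages=15, fetch=None):
    """Approximate the total by paging until a short page comes back."""
    total = 0
    for page in range(1, max_pages + 1):
        results = search_page(query, page, rpp, fetch)
        total += len(results)
        if len(results) < rpp:  # short page: no more results
            break
    return total
```

Note this only gives a lower bound for very popular phrases, since the API caps how far back you can page.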
I need to set up a Twitter account for one of my clients, which is a
public library. Is there anything I need to set up differently than what
is provided on your "set up account" page?
What would you suggest?
Spyrrow
hi
Can anyone help me follow and reply on Twitter via the API using auth
tokens?
thanks
For the last three days I've had half or more of my site's posts fail
to go through. I don't yet have full debugging results to show what I
was receiving each time it failed, but I want to share this bit now.
This server is not yet on the API whitelist. It makes 3-5 API requests
every hour for mos
John:
Will the "third system" be used if, e.g., the user has 1000 friends
and we request friends/ids WITHOUT pagination? Or must we include
pagination arguments even if <5000 to use the third system?
PJB
On Sep 7, 9:52 pm, John Kalucki wrote:
> I don't know all the details, but my general u
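For what it's worth, here's a hedged Python sketch of what a cursor-based friends/ids walk might look like, assuming the documented cursor=-1 starting value and next_cursor response field; whether the "third system" kicks in without explicit cursor arguments is exactly the open question, so this always passes a cursor. The fetch parameter is a hypothetical hook for testing without the network:

```python
import json
import urllib.request

def fetch_all_friend_ids(screen_name, fetch=None):
    """Walk friends/ids with cursors until next_cursor comes back as 0."""
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read())
    ids, cursor = [], -1  # -1 requests the first page
    while cursor != 0:
        url = ("http://twitter.com/friends/ids.json"
               "?screen_name=%s&cursor=%d" % (screen_name, cursor))
        page = json.loads(fetch(url))
        ids.extend(page["ids"])
        cursor = page["next_cursor"]  # 0 signals the last page
    return ids
```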
I don't know all the details, but my general understanding is that
these bulk followers calls have been heavily returning 503s for quite
some time now, and this is long established, but bad, behavior. These
bulk calls are hard to support and they need to be moved over to some
form of practical pag
I might add that, as ever, a message on status.twitter mentioning this
would really go a long way.
David.
On Sep 8, 5:27 am, "David W." wrote:
> Hi John,
>
> On Sep 6, 3:59 pm, John Kalucki wrote:
>
> > resources. There is minor pagination jitter in one case and a certain
> > class of row-cou
Hi John,
On Sep 6, 3:59 pm, John Kalucki wrote:
> resources. There is minor pagination jitter in one case and a certain
> class of row-count-based queries have to be deprecated (or limited)
> and replaced with cursor-based queries to be practical. For now, we're
> sending the row-count-queries
I could really go for "jittery" right now... instead I'm getting
"totally broken"!
I'm getting two pages of results, using ?page=x, then empty. To me, it
looks like all my accounts have max 10K followers. I'd love some kind
of official response from Twitter on the status of paging (John?).
Examp
I think the main question is: when will we be able to retrieve statuses
from protected users via the search or streaming API (if authenticated
and allowed, of course)?
I have some protected accounts I'm using to archive IRC conversations,
and I'd like to be able to search in them (without a search feature
Technically possible or not, streaming protected statuses isn't a
current priority. In my opinion, and in my opinion only, it's also not
a good idea, regardless of the safeguards employed.
-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.
On Sep 6, 11:16 am, Monica Keller wrote:
Personally, I think it would be great if the Streaming API could
support streaming the with_friends timeline. There are many compelling
use cases.
You can simulate streaming the with_friends timeline by grabbing your
following list to populate a follow parameter to
/1/statuses/filter.format
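In case it helps, a minimal Python sketch of that approach, assuming the stream.twitter.com/1/statuses/filter.json endpoint with Basic auth as documented at the time (the request-building is split out so it can be checked without connecting):

```python
import base64
import json
import urllib.request

def filter_request(follow_ids, user, password):
    """Build an authenticated POST to the streaming filter endpoint."""
    body = "follow=" + ",".join(str(i) for i in follow_ids)
    req = urllib.request.Request(
        "http://stream.twitter.com/1/statuses/filter.json",
        data=body.encode("ascii"))
    token = base64.b64encode(("%s:%s" % (user, password)).encode("ascii"))
    req.add_header("Authorization", "Basic " + token.decode("ascii"))
    return req

def stream(req):
    """Yield one decoded status per line of the long-lived response."""
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            line = line.strip()
            if line:  # the stream sends blank keep-alive lines
                yield json.loads(line)
```

You'd refresh the follow list periodically by reconnecting, since the 2009-era filter stream doesn't update the predicate mid-connection.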
This describes what I'd call row-based pagination. Cursor-based
pagination does not suffer from the same jitter issues. A cursor-based
approach returns an opaque value that is unique and ordered within the
total set, and indexed for constant-time access. Removals in pages
before or after do not affect the
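To illustrate the jitter difference John describes, here's a deliberately toy Python model (this is not Twitter's implementation; the "cursor" here is just the last id seen, where a real system would use an indexed opaque token, but the resumption property is the same):

```python
def page_based(items, page, per_page):
    """Row-offset pagination: re-slices the live list on every call."""
    start = page * per_page
    return items[start:start + per_page]

def cursor_based(sorted_ids, cursor, per_page):
    """Cursor pagination: resume strictly after the last id seen."""
    page = [i for i in sorted_ids if cursor is None or i > cursor][:per_page]
    next_cursor = page[-1] if page else cursor
    return page, next_cursor
```

If a row on an already-read page is deleted between calls, the row-offset version shifts every later page forward and silently skips an item; the cursor version resumes exactly where it left off.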
Do the apparently extraneous tweets happen to be in_reply_to a
specified user_id?
-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.
On Sep 5, 10:17 pm, Steve Farrell wrote:
> Hi!
>
> I'm just getting started using the follow API. We got access to
> "shadow" (thanks!) and I'm ta
This issue was aired considerably in this thread:
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/8665766f5e262d60
On Sep 7, 9:33 am, Monica Keller wrote:
> +1 definitely !
>
> I think everyone asks Twitter the same question but the problem was
> they developed the
If you are having connection problems like this, please send your IP
address, account(s), a curl(1) -vvv trace, and a tcpdump of the
failure to a...@twitter.com.
-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.
On Sep 7, 5:02 pm, fablau wrote:
> I am having the same issue, most
I am having the same issue: most of the time I cannot connect to
Twitter, I get a 408 error, and the API is mostly unusable from my
side. I am able to connect just a couple of times every 36-48 hours!
Are we the only people having this issue? How can that be possible? Is
there any way to contact Twi
When I send a tweet, I don't see it afterwards on the pages of the
people I follow.
When a tweet is sent by someone I follow, it does appear on my site,
but not the other way around.
Have I perhaps configured something incorrectly?
Dick Hofman
Hi,
Adding some kind of weight value or sorting of the returned trending
topics would be a very good feature, for example to be able to create
nice word clouds.
/Håkan
On Aug 24, 6:21 am, Chad Etzel wrote:
> Hi,
>
> There is currently no way to get the number of retweets/tweets of
> trending t
We've been seeing 408s since the DoS attack back in July/August.
They feel like rate limiting on Twitter's part when it's overloaded.
Can't tell for sure, since 408 isn't listed as an error they throw at
api.twitter.com.
Jeff
On Sep 6, 8:51 am, bosher wrote:
> Random 408 errors are being returned when user
+1, definitely!
I think everyone asks Twitter the same question, but the problem was
they developed the firehose prior to PSHB.
What are the main cons of PSHB?
On Mon, Sep 7, 2009 at 8:48 AM, Jesse Stay wrote:
> Not necessarily. See this document (which I've posted earlier on this list)
> for d
I've been playing around with updating profile images through the API,
and while the main twitter site updates instantly, it's clear that
third party clients tend to cache profile images pretty aggressively.
I'm in the process of testing this across a few different clients
myself, but I was curio
One question is: how does FriendFeed handle protected updates, since
the streaming API is for public statuses only?
On Mon, Sep 7, 2009 at 6:32 AM, John Kalucki wrote:
>
> Friendfeed consumes the Twitter Streaming API to update Twitter
> status. SUP is not employed.
>
> All Twitter accounts have
Treat it as an event log.
Sort in inverse order of the date they became followers and return it
in pages which include adds and deletes. This will allow an in-sync
copy of the data to be maintained elsewhere.
On Mon, Sep 7, 2009 at 5:09 AM, Dewald Pretorius wrote:
>
> I don't understand why it would
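A sketch of the consumer side of that idea in Python, assuming a hypothetical event format of (action, user_id) pairs delivered oldest-first; the event shape is my invention for illustration, not anything Twitter provides:

```python
def apply_events(local, events):
    """Replay follower add/delete events onto a local copy of the set.

    local: a set of follower ids maintained elsewhere.
    events: iterable of (action, user_id) pairs, oldest first, where
    action is "add" or "del".
    """
    for action, user_id in events:
        if action == "add":
            local.add(user_id)
        elif action == "del":
            local.discard(user_id)  # discard tolerates already-gone ids
    return local
```

Because each event is applied idempotently in order, a consumer that remembers the last event it processed can stay in sync without ever re-fetching the whole follower list.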
Why would you use the DB? Just do it all in memory, right?
On Mon, Sep 7, 2009 at 6:52 AM, Dewald Pretorius wrote:
>
> SUP will not work for Twitter or any other service that deals with
> very large data sets.
>
> In essence, a Twitter SUP feed would be one JSON array of all the
> Twitter user
Hi, this application http://twitter.com/oauth_clients/details/16368 is
not posting, I keep getting the following error message, please can
you advise what might be wrong?
Woah there!
This page is no longer valid. It looks like someone already used the
token information you provided. Please return
Same here as well: a blank 4.01 response.
I've been having some status updates fail using oAuth with .NET over
the last few days. It seems to be an intermittent problem, and, like
yours, my code's been working fine for months...
Cheers,
Rich.
On Sep 5, 2:20 am, Bobby Gaza wrote:
> Hi,
>
> I was curious if anyone has seen any calls to
I am seeing the 200 "errors" also from our sites. I tried getting
status using curl and it returns the 200 HTML and then the status
intermittently. If I specify JSON I still get the HTML on the errors
and JSON data on the status. This is really affecting our website.
Hi,
I have to implement updating Twitter status through JS.
Need pointers on how to get started
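One pointer: browser JavaScript of this era can't POST cross-domain to twitter.com, so the usual pattern is for the page's JS to call your own server, which then makes the authenticated API call. A minimal Python sketch of that server-side half, assuming Basic auth against statuses/update.json (OAuth would additionally need request signing):

```python
import base64
import urllib.request
from urllib.parse import urlencode

def build_update_request(status, user, password):
    """Build an authenticated POST to statuses/update for a server-side proxy."""
    req = urllib.request.Request(
        "http://twitter.com/statuses/update.json",
        data=urlencode({"status": status}).encode("utf-8"))
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    req.add_header("Authorization", "Basic " + token.decode("ascii"))
    return req

def send_update(status, user, password):
    """Fire the request; returns the raw JSON response body."""
    return urllib.request.urlopen(
        build_update_request(status, user, password)).read()
```

Your page's JS would then just POST the status text to your own endpoint, which calls send_update.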
I am able to consistently reproduce this error. I am making GET
requests via PHP from IP: 96.30.16.192. I receive the error without
fail after periods of inactivity lasting 2 hours or more. The header
response code is 200. Please let me know if I can provide any
additional info that might help
I am able to consistently reproduce this error -- I get this response
almost without fail after periods of inactivity greater than 2 hours.
I am requesting XML via PHP, it's a GET request. The requests are
coming from 96.30.16.192. Let me know if I can provide any additional
info that might help
The point of it all would be performance. Obviously this would have to
be done in a secure fashion, but the streaming API and privacy are not
mutually exclusive.
John, do you think this will be possible? Maybe by passing some of the
OAuth credentials?
On Aug 26, 8:18 pm, JDG wrote:
> I would hope the
It is happening on our site and I checked from one of our other sites
using curl. It gives the 200 "error" one minute and then the status
response the next. It does not matter if I request JSON it still
returns the HTML.
After a user has authenticated via OAuth, the following does not work:
post_statusesUpdate(array('status' => $tweet));
but the following does work, even though the additional code should not
make a difference to the Twitter API:
get_statusesUser_timeline(array('screen_name' => $username));
Hi!
I'm just getting started using the follow API. We got access to
"shadow" (thanks!) and I'm taking it for a spin now. I'm following
about 7k people.
Something weird I've found is that I seem to routinely get tweets from
users who were not included in my follow=parameter. I think I must be
Hi everyone,
I wish to know if it's possible to get nearly real-time timeline
updates from my own account. I checked stream.twitter.com, but I don't
think it provides a user-timeline option.
Regards,
PS: I'm not a native English speaker :)
Not necessarily. See this document (which I've posted earlier on this list)
for details: http://code.google.com/p/pubsubhubbub/wiki/PublisherEfficiency
In essence, with PSHB (Pubsub Hubbub), Twitter would only have to retrieve
the latest data, add it to flat files on the server or a single column
Can we please hear something from someone at Twitter about this? It's
becoming unusable with constant XML errors.
On Sep 7, 4:51 am, Naveen A wrote:
> We are seeing this HTML META REFRESH as well from our clients. We are
> a mobile application and seeing this issue more and more frequently to
> t
SUP will not work for Twitter or any other service that deals with
very large data sets.
In essence, a Twitter SUP feed would be one JSON array of all the
Twitter users who have posted a status update in the past 60 seconds.
a) The SUP feed will consistently contain a few million array entries.
Friendfeed consumes the Twitter Streaming API to update Twitter
status. SUP is not employed.
All Twitter accounts have access to the Streaming API, documented
here: http://apiwiki.twitter.com/Streaming-API-Documentation
-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.
On Sep 7,
I don't understand why it would be foolish. Nevertheless, if flat
files are considered archaic, then memcache dedicated to caching large
social graph id lists for several minutes would provide the same
benefits, wouldn't it?
The reason why I would prefer flat files above memcache is that you're
n
Hi all,
I've recently been doing some research on how FriendFeed manages to
push user's twitter updates to users FriendFeed profile so fast. I was
very impressed at the speed these updates were delivered to FriendFeed
and appears on my profile (within 5 seconds) so I started looking into
how it w
Either way an XML or JSON feed should NEVER return HTML!
On Sep 7, 11:25 am, Ben Eliott wrote:
> IP: 67.23.28.168, time is Europe/London
>
> 2009-09-07 11:19:48,014 - twittersearch.models - CRITICAL - Search did
> not reutrn a json object! code = 200 answer = "-//W3C//DTD HTML 4.01//EN"
> "h
IP: 67.23.28.168, time is Europe/London
2009-09-07 11:19:48,014 - twittersearch.models - CRITICAL - Search did
not reutrn a json object! code = 200 answer = "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/1999/REC-html401-19991224/strict.dtd
">
Starting to wonder whether this i
Flat-file generation and maintenance would be foolish at this stage.
Separating out the individual data sets purely for the API, to be
served by different clusters with server-side caching, may fit the
bill - but tbh if this isn't happening already I'll be shocked.
On Sep 7, 5:40 am, Jesse Stay wrote
I've opened a feature request for this in the issues database:
http://code.google.com/p/twitter-api/issues/detail?id=1011
If you like this idea and / or think it's a good thing, please
indicate your support both here and in the issues forum.
If you don't like it and / or don't think it's so hot