+1

Another case which suffers the same issue:

client A runs a query / explain plan
client B drops an index
client A's queries / explain plans fail
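To make the failure mode concrete, here is a self-contained toy sketch (not Phoenix code; all names are invented for illustration) of client A planning against a cached metadata snapshot while the server-side truth changes underneath it:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative only: client A plans queries against a schema snapshot
// cached at connect time; client B drops an index server-side; A's plan
// still references the index and fails at execution.
public class StaleIndexDemo {
    // "Server-side" truth: which indexes currently exist per table.
    static final Map<String, Set<String>> server = new HashMap<>();

    // Client A's metadata cache, captured once and never refreshed.
    static final Map<String, Set<String>> clientACache = new HashMap<>();

    static String explainWithCache(String table, String index) {
        // Planning consults only the stale client-side cache.
        if (!clientACache.getOrDefault(table, Set.of()).contains(index)) {
            return "NO SUCH INDEX";
        }
        // Execution then hits the server, where the index may be gone.
        if (!server.getOrDefault(table, Set.of()).contains(index)) {
            return "FAILS AT EXECUTION: index dropped server-side";
        }
        return "SCAN OVER " + index;
    }

    public static void main(String[] args) {
        server.put("T", new HashSet<>(Set.of("IDX1")));
        clientACache.put("T", new HashSet<>(Set.of("IDX1"))); // A connects, caches schema
        System.out.println(explainWithCache("T", "IDX1"));    // plan happily uses IDX1

        server.get("T").remove("IDX1");                       // B drops the index
        System.out.println(explainWithCache("T", "IDX1"));    // A's cache is stale: failure
    }
}
```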

This might sound like a strange use case, but when testing performance and examining execution plans, I found myself building big indexes from a separate sqlline session while running queries in a SQL GUI.

What I'd like is either a connection string setting which aggressively refreshes the schema, or a command like MySQL's FLUSH PRIVILEGES or SET xxx ON which gets intercepted by the driver.
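The second idea could look something like the sketch below: a driver-side executor that intercepts a hypothetical "REFRESH SCHEMA" statement and clears the client's metadata cache locally instead of shipping it to the server. To be clear, nothing here is real Phoenix API; the statement text, class, and cache are all invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a driver-intercepted refresh command, analogous
// to MySQL's FLUSH PRIVILEGES. All names are hypothetical.
public class RefreshingExecutor {
    // Stands in for the driver's client-side schema cache.
    static final Map<String, String> schemaCache = new ConcurrentHashMap<>();

    static String execute(String sql) {
        if (sql.trim().equalsIgnoreCase("REFRESH SCHEMA")) {
            schemaCache.clear();           // force a re-fetch on next use
            return "cache cleared";
        }
        return "sent to server: " + sql;   // everything else passes through
    }

    public static void main(String[] args) {
        schemaCache.put("T", "T(K, V1)");
        System.out.println(execute("UPSERT INTO T VALUES (1, 'a')"));
        System.out.println(execute("REFRESH SCHEMA")); // intercepted by the driver
        System.out.println(schemaCache.isEmpty());     // stale entries are gone
    }
}
```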

Andrew.

On 25/06/2014 19:43, Jody Landreneau wrote:
I have a use case where I have multiple instances of the Phoenix client running on multiple machines. Essentially, they are taking data and performing upserts. When I update a table schema, e.g. by adding a column, the clients start failing. The update is performed via a client outside these running instances. The code that builds the upsert statement understands that an additional column was added and creates the proper upsert statement.

Thinking this was a connection cache issue, I tried setting a max time to close connections (they are in a pool). This did not work. I ended up tracing the issue and finding that there is a MetaDataImpl cache that gets populated on startup. Table schemas are stored in this cache. When something like an upsert is performed, there is code in FromCompiler that checks columns, but the columns are not updated in this internal cache.
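The interval-refresh idea can be sketched as a TTL on each cache entry: a lookup older than the configured interval re-resolves the schema from the source of truth (the server, in Phoenix's case). This is a minimal, self-contained illustration, not Phoenix code; the class, TTL value, and resolver are invented, and the current time is passed in explicitly to keep the sketch deterministic.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical TTL-based metadata cache: entries carry a load timestamp,
// and lookups past the TTL re-fetch the schema instead of serving stale data.
public class TtlMetadataCache {
    static final long TTL_MILLIS = 50;

    static final class Entry {
        final String schema;
        final long loadedAt;
        Entry(String schema, long loadedAt) { this.schema = schema; this.loadedAt = loadedAt; }
    }

    final Map<String, Entry> cache = new ConcurrentHashMap<>();
    final Supplier<String> resolver; // stands in for a server round trip

    TtlMetadataCache(Supplier<String> resolver) { this.resolver = resolver; }

    String get(String table, long now) {
        Entry e = cache.get(table);
        if (e == null || now - e.loadedAt > TTL_MILLIS) {
            e = new Entry(resolver.get(), now);  // missing or stale: re-fetch
            cache.put(table, e);
        }
        return e.schema;
    }

    public static void main(String[] args) {
        String[] serverSchema = { "T(K, V1)" };
        TtlMetadataCache cache = new TtlMetadataCache(() -> serverSchema[0]);
        System.out.println(cache.get("T", 0));    // first lookup loads the schema
        serverSchema[0] = "T(K, V1, V2)";         // a column is added server-side
        System.out.println(cache.get("T", 10));   // within TTL: still the stale schema
        System.out.println(cache.get("T", 100));  // past TTL: the new column is visible
    }
}
```

The trade-off is the usual one: a short interval narrows the stale window but adds round trips; storing the interval per connection (or as a driver property) would let each client pick its own balance.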

I can file an issue on this with more details, but I wanted to get some insight from others first as to whether there is some property to pass in that can tell the cache to refresh at some interval. Possibly this should be stored on the connections, reusing the connection timeout, or there could be a property passed to the driver itself to cause a refresh.

thanks --
