On Jun 30, 2012, at 6:35 PM, espresso maker wrote:

> Hehe. That's a very good MySQL observation. :) 
> 
> I was trying to avoid Hadoop & MapReduce because my data doesn't grow beyond 
> ~5 million rows per device (appliance). At 5 million, I am still able to run 
> my queries in a very reasonable time, and most if not all of the queries need 
> to be realtime, not batched.
> 
> If I want to create a table per device, would it be possible to bind the ORM 
> object to a table dynamically? Are there any SQLAlchemy examples I can look 
> at that do something similar?


Very poorly.   The pattern we offer for mapping a class to a table chosen at 
runtime is here: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName
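A minimal sketch of the idea behind that recipe, using SQLAlchemy's imperative
(classical) mapping API: build a Table and a mapped class per device on the fly,
caching the result since a class/table pair may only be mapped once. The table
layout, column names, and naming scheme below are hypothetical, not part of the
recipe itself:

```python
from sqlalchemy import Column, Integer, String, MetaData, Table, create_engine
from sqlalchemy.orm import Session, registry

metadata = MetaData()
mapper_registry = registry()
_device_classes = {}  # cache: a class/table pair may only be mapped once


def class_for_device(device_id):
    """Return an ORM class mapped to this device's own log table,
    creating the Table and the class dynamically on first use."""
    if device_id in _device_classes:
        return _device_classes[device_id]

    table = Table(
        "logs_%s" % device_id,
        metadata,
        Column("id", Integer, primary_key=True),
        Column("oid", String(128)),   # hypothetical SNMP field columns
        Column("value", Integer),
    )

    def _init(self, **kw):
        # simple keyword constructor for the dynamically built class
        for key, val in kw.items():
            setattr(self, key, val)

    cls = type("Log_%s" % device_id, (object,), {"__init__": _init})
    mapper_registry.map_imperatively(cls, table)
    _device_classes[device_id] = cls
    return cls
```

Each returned class then works like any other mapped class: call
`metadata.create_all(engine)` after the tables you need exist, then add and
query instances through a `Session` as usual.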



> 
> Thanks!
> 
> On Saturday, June 30, 2012 2:10:55 PM UTC-7, Michael Bayer wrote:
> OK, well, it's MySQL, so sure, if you want to make a table per customer, it's 
> not a terrible drain on MySQL. The "create tables on the fly" thing makes DBAs 
> very upset, but then again, MySQL databases are usually not DBA-controlled...
> 
> Still, it seems these tables aren't referred to by any other tables; 
> otherwise table-per-customer would be quite unwieldy.   I have a vague 
> recollection that Solr can be used for this sort of thing too... and Google 
> says yes! In fact, there's explicitly a group (well, it's Rackspace!) that 
> chose a Solr/Hadoop solution over "partitioned MySQL":
> 
> http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data
> 
> Solr is a very good product and worth looking into here.
> 
> 
> On Jun 30, 2012, at 2:20 PM, espresso maker wrote:
> 
>> 1. The logs are selected quite often based on indexed columns, and this is 
>> done via a web portal graphing tool. Maybe, for clarity, I shouldn't refer 
>> to them as logs: the data is specific SNMP fields where a schema fits well, 
>> and the queries are fairly basic with some GROUP BY aggregation.
>> 
>> 2. I see. I was thinking of using MyISAM to store the "log" tables, which 
>> makes dropping a table equivalent to deleting a file.  
>> 
>> 3. Any time I've deleted a device, the operation took some time, and that 
>> table is read/write-intensive, so I am afraid that deleting 5M records would 
>> interrupt things. (Using InnoDB.)
>> 
>> 4. I am not expecting more than 5k customers. But I can imagine how managing 
>> that can be a hassle. 
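The deletion concern in point 3 is commonly mitigated by deleting in small
batches rather than issuing one 5M-row DELETE, so each transaction holds
InnoDB row locks only briefly and concurrent reads/writes can interleave.
A minimal sketch with SQLAlchemy Core, assuming a hypothetical table with
`id` and `device_id` columns (the table and column names are illustrative):

```python
from sqlalchemy import bindparam, create_engine, text


def delete_device_rows(engine, table, device_id, chunk=10000):
    """Delete one device's rows in batches of `chunk` so each
    transaction stays short; returns the number of rows deleted."""
    deleted = 0
    while True:
        with engine.begin() as conn:
            # grab up to `chunk` primary keys belonging to this device
            ids = [row[0] for row in conn.execute(
                text("SELECT id FROM %s WHERE device_id = :d LIMIT :n" % table),
                {"d": device_id, "n": chunk},
            )]
            if not ids:
                return deleted
            # delete just that batch; expanding bindparam renders the IN list
            conn.execute(
                text("DELETE FROM %s WHERE id IN :ids" % table).bindparams(
                    bindparam("ids", expanding=True)
                ),
                {"ids": ids},
            )
            deleted += len(ids)
```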
>> 
>> 
>> Should I keep things the way they are: a single database, with a single 
>> table for all device logs of all customers? Or is there a better approach 
>> I can take?
>> 
>> Thanks
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "sqlalchemy" group.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msg/sqlalchemy/-/yviCZpEc-fwJ.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> To unsubscribe from this group, send email to 
>> sqlalchemy+unsubscr...@googlegroups.com.
>> For more options, visit this group at 
>> http://groups.google.com/group/sqlalchemy?hl=en.
> 
> 
