Ah thanks a lot! I totally missed table partitioning. I love the idea.
On Sunday, July 1, 2012 3:49:55 AM UTC-7, Wolfgang wrote:
Having a single table to store all the customer's logs didn't seem to
work because when the device is removed and the logs have to be
deleted, the table gets locked for a while.
I like the time based partitioning idea. I am going to explore it more.
I usually try to stay away from triggers. They are hard to manage and
debug.
You understood my intent straight away. :)
Thanks!
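For anyone reading along, here is a rough sketch of what the time-based partitioning idea could look like in MySQL; the table and column names are invented for illustration. The point is that a month of old logs goes away with DROP PARTITION, a near-instant metadata operation, instead of a long DELETE that locks the table:

```sql
-- Hypothetical log table, RANGE-partitioned by month.
CREATE TABLE device_logs (
    device_id INT UNSIGNED NOT NULL,
    logged_at DATETIME     NOT NULL,
    oid       VARCHAR(64)  NOT NULL,  -- SNMP field identifier
    value     BIGINT       NOT NULL,
    KEY idx_device_time (device_id, logged_at)
)
PARTITION BY RANGE (TO_DAYS(logged_at)) (
    PARTITION p201206 VALUES LESS THAN (TO_DAYS('2012-07-01')),
    PARTITION p201207 VALUES LESS THAN (TO_DAYS('2012-08-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Expiring June's logs is a metadata operation, not a row-by-row delete:
ALTER TABLE device_logs DROP PARTITION p201206;
```

New monthly partitions have to be added ahead of time (e.g. from a cron job that runs ALTER TABLE ... ADD PARTITION), which is the "defined upfront" caveat mentioned elsewhere in this thread.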
On Sunday, July 1, 2012 6:30:03 AM UTC-7, Vlad K. wrote:
I see two approaches here.
Having a single table to store all the customer's logs didn't seem to
work because when the device is removed and the logs have to be
deleted the table gets locked for a while.
Huh? Bad choice of DBMS/OS? Bad application design?
My question is, is this a good idea? Am I missing something?
I see two approaches here. The first is partitioning, as has been
suggested, except partitions need to be defined upfront, if I'm not
mistaken, at least in MySQL (expression + size), whereas in PostgreSQL
you need custom triggers on write operations on the main table, so I'm
not sure if
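For comparison, the PostgreSQL mechanism referred to above (as of 2012, before declarative partitioning existed) is table inheritance plus a routing trigger on the parent. A minimal sketch with invented names and a single monthly child:

```sql
-- Parent table; children inherit its columns and hold the actual rows.
CREATE TABLE device_logs (
    device_id integer   NOT NULL,
    logged_at timestamp NOT NULL,
    value     bigint    NOT NULL
);

CREATE TABLE device_logs_2012_07 (
    CHECK (logged_at >= DATE '2012-07-01' AND logged_at < DATE '2012-08-01')
) INHERITS (device_logs);

-- Trigger function that routes each INSERT to the matching child table.
CREATE OR REPLACE FUNCTION device_logs_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logged_at >= DATE '2012-07-01'
       AND NEW.logged_at < DATE '2012-08-01' THEN
        INSERT INTO device_logs_2012_07 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for timestamp %', NEW.logged_at;
    END IF;
    RETURN NULL;  -- the row has already gone to the child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER device_logs_route
    BEFORE INSERT ON device_logs
    FOR EACH ROW EXECUTE PROCEDURE device_logs_insert();

-- Expiring a month is then just: DROP TABLE device_logs_2012_07;
```

This is the trigger-maintenance burden being weighed against MySQL's built-in (but defined-upfront) partitions.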
thoughts:
1. Why even use a relational database for logging? Especially with 5M
rec/month/device/customer, how often is there a genuine SELECT with special
criteria for those records?
2. I'm not a huge fan of table-per-customer. Relational schemas aren't designed
to work this way, the
1: The logs are selected quite often based on indexed columns, and this is
done via a web portal graphing tool. Maybe I shouldn't refer to them as
logs, for clarity, but the data is specific SNMP fields where a schema fits
well and the queries are fairly basic with some GROUP BY aggregation.
2.
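To make "fairly basic with some GROUP BY aggregation" concrete, the graphing queries in question might look something like this (table, column, and OID names are illustrative, not from the actual schema):

```sql
-- Hourly average of one SNMP counter for one device over the last day.
SELECT DATE_FORMAT(logged_at, '%Y-%m-%d %H:00') AS hour,
       AVG(value) AS avg_value
  FROM device_logs
 WHERE device_id = 42
   AND oid = 'ifInOctets'
   AND logged_at >= NOW() - INTERVAL 1 DAY
 GROUP BY hour
 ORDER BY hour;
```

With a composite index on (device_id, logged_at), this kind of range scan stays cheap even at ~5M rows per device, which matches the "reasonable time" observation later in the thread.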
OK, well, it's MySQL, so sure, if you want to make a table per customer, it's not a
terrible drain on MySQL. The create-tables-on-the-fly thing makes DBAs very
upset, but then again, MySQL DBs are usually not DBA-controlled...
Still, it seems like these tables aren't referred to by any other
hehe. That's a very good MySQL observation. :)
I was trying to avoid Hadoop map-reduce because my data doesn't grow more
than ~5 million records / device (appliance). At 5 million, I am still able to
run my queries in a very reasonable time, and most if not all the queries
need to be realtime and
On Jun 30, 2012, at 6:35 PM, espresso maker wrote:
Hi there,
I'd like to get some input on the following design approach.
I am creating a service where I have to save the customer's device logs.
Every customer can have more than one device, and every device can have as
many as 5M records over a period of a month or so.
Having a single table to store all the customer's logs didn't seem to work
because when the device is removed and the logs have to be deleted, the
table gets locked for a while.