On Sat, 20 Aug 2016, 2:00 a.m., Andy Colson wrote:
> On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> > I want to store logs in a simple table.
> >
> > Here are my columns:
> >
> > Primary-key (auto generated)
> > timestamp
> > host
> > service-on-host
> > loglevel
> > msg
> > json (optional)
On 19/08/16 10:57, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for adhoc
querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're
talking big tables.
in fact that's several rows/second on a 24/7 basis
On 8/19/2016 2:32 AM, Thomas Güttler wrote:
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
I am unsure which DB to choose: Postgres, ElasticSearch or ...?
We don't have high traffic. About 200k rows per day.
On Sat, Aug 20, 2016 at 1:13 AM, Francisco Olarte
wrote:
> Hi Victor:
>
>
> On Fri, Aug 19, 2016 at 7:02 PM, Victor Blomqvist wrote:
> > What I want to avoid is my query visiting the whole 1m rows to get a result,
> > because in my real table that can take 100sec. At the same time I want the
> > queries that only need to visit 1k rows finish quickly
On Fri, Aug 19, 2016 at 2:32 AM, Thomas Güttler
wrote:
> I want to store logs in a simple table.
>
> Here are my columns:
>
> Primary-key (auto generated)
> timestamp
> host
> service-on-host
> loglevel
> msg
> json (optional)
>
> I am unsure which DB to choose: Postgres, ElasticSearch or ...?
Hi Victor:
On Fri, Aug 19, 2016 at 7:02 PM, Victor Blomqvist wrote:
> What I want to avoid is my query visiting the whole 1m rows to get a result,
> because in my real table that can take 100sec. At the same time I want the
> queries that only need to visit 1k rows finish quickly, and the queries
On Fri, Aug 19, 2016 at 3:20 PM, Daniel Verite wrote:
> There's a simple technique that works on top of a Feistel network,
> called the cycle-walking cipher. Described for instance at:
> http://web.cs.ucdavis.edu/~rogaway/papers/subset.pdf
> I'm using the opportunity to add a wiki page:
> https://
On Fri, Aug 19, 2016 at 6:01 PM, Francisco Olarte
wrote:
> Hi Victor:
>
> On Fri, Aug 19, 2016 at 7:06 AM, Victor Blomqvist wrote:
> > Is it possible to break/limit a query so that it returns whatever results
> > found after having checked X amount of rows in an index scan?
> >
> > For example:
>
On 8/19/2016 3:44 AM, Andreas Kretschmer wrote:
So, in your case, consider partitioning, maybe per month. That way you can
also avoid problems with table and index bloat.
with his 6-week retention, i'd partition by week.
--
john r pierce, recycling bits in santa cruz
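A minimal sketch of the weekly partitioning suggested above, assuming declarative partitioning as it exists in PostgreSQL 10+ (on the 9.5 of this thread the same layout is built with inheritance plus CHECK constraints); table and column names are illustrative:

```sql
-- Parent table, range-partitioned on the timestamp column.
CREATE TABLE logs (
    ts  timestamptz NOT NULL,
    msg text
    -- ... remaining log columns ...
) PARTITION BY RANGE (ts);

-- One child table per week; create the next few ahead of time.
CREATE TABLE logs_2016_w33 PARTITION OF logs
    FOR VALUES FROM ('2016-08-15') TO ('2016-08-22');
CREATE TABLE logs_2016_w34 PARTITION OF logs
    FOR VALUES FROM ('2016-08-22') TO ('2016-08-29');

-- Enforcing the 6-week retention is then just dropping old partitions,
-- which is near-instant and leaves no table or index bloat behind:
DROP TABLE logs_2016_w33;
```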
Francisco Olarte wrote:
> I think there are some pseudo-random number generators which
> can be made to work with any range, but do not recall which ones right
> now.
There's a simple technique that works on top of a Feistel network,
called the cycle-walking cipher. Described for instance at:
http://web.cs.ucdavis.edu/~rogaway/papers/subset.pdf
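For concreteness, the combination being discussed pairs a small Feistel permutation (the pseudo_encrypt() example from the PostgreSQL wiki) with cycle-walking: keep re-applying the permutation until the output falls back inside the target range. A sketch, with round constants taken from the wiki example:

```sql
-- 3-round Feistel network over 32-bit ints (pseudo_encrypt from the
-- PostgreSQL wiki); a bijection on [0, 2^31), so no collisions.
CREATE FUNCTION pseudo_encrypt(value int) RETURNS int AS $$
DECLARE
  l1 int; l2 int; r1 int; r2 int; i int := 0;
BEGIN
  l1 := (value >> 16) & 65535;
  r1 := value & 65535;
  WHILE i < 3 LOOP
    l2 := r1;
    r2 := l1 # ((((1366 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
    l1 := l2; r1 := r2; i := i + 1;
  END LOOP;
  RETURN ((r1 << 16) + l1);
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;

-- Cycle-walking: re-encrypt until the result lands back in [0, n].
-- Still a bijection on [0, n], provided the input is also in [0, n].
CREATE FUNCTION bounded_pseudo_encrypt(value int, n int) RETURNS int AS $$
DECLARE
  r int := value;
BEGIN
  LOOP
    r := pseudo_encrypt(r);
    EXIT WHEN r BETWEEN 0 AND n;
  END LOOP;
  RETURN r;
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
```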
On 19.08.2016 at 12:44, Andreas Kretschmer wrote:
Thomas Güttler wrote:
How will you be using the logs? What kind of queries? What kind of searches?
Correlating events and logs from various sources could be really easy with
joins, count and summary operations.
Wishes raise with possibilities. First I want to do simple queries about
On Fri, Aug 19, 2016 at 12:44 PM, Andreas Kretschmer
wrote:
> for append-only tables like this consider 9.5 and BRIN indexes for
> timestamp searches. But if you delete after N weeks, BRIN may not work
> properly because of vacuum and re-use of space within the table.
> Do you know BRIN?
>
> So, in your case, consider partitioning, maybe per month.
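For reference, a BRIN index on the timestamp column is a one-liner (a sketch; the table and column names are assumed, not from the thread):

```sql
-- BRIN stores only per-block-range min/max of ts, so it stays tiny
-- even at millions of rows -- but it only helps while physical row
-- order roughly tracks ts, which deletes + vacuumed-space reuse break.
CREATE INDEX logs_ts_brin ON logs USING brin (ts);

-- The kind of search it accelerates:
SELECT * FROM logs
WHERE ts >= now() - interval '1 day';
```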
Thomas Güttler wrote:
>> How will you be using the logs? What kind of queries? What kind of searches?
>> Correlating events and logs from various sources could be really easy with
>> joins, count and summary operations.
>
> Wishes raise with possibilities. First I want to do simple queries about
On 19.08.2016 at 10:57, Thomas Güttler wrote:
>
>
> On 19.08.2016 at 09:42, John R Pierce wrote:
[...]
>> in fact that's several rows/second on a 24/7 basis
>
> There is no need to store them more then 6 weeks in my current use case.
>
> I think indexing in postgres is much faster
On 19.08.2016 at 11:21, Sameer Kumar wrote:
On Fri, Aug 19, 2016 at 4:58 PM Thomas Güttler <guettl...@thomas-guettler.de> wrote:
On 19.08.2016 at 09:42, John R Pierce wrote:
> On 8/19/2016 12:32 AM, Thomas Güttler wrote:
>> What do you think?
>
> I store most of my logs in flat textfiles syslog style, and use grep for
> adhoc querying.
Hi Victor:
On Fri, Aug 19, 2016 at 7:06 AM, Victor Blomqvist wrote:
> Is it possible to break/limit a query so that it returns whatever results
> found after having checked X amount of rows in an index scan?
>
> For example:
> create table a(id int primary key);
> insert into a select * from generate_series(1, 1000000);
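There is no built-in "stop after examining X rows", but two close approximations exist (a sketch against the toy table above; the filter, the 100-row fetch, and the 100ms budget are arbitrary choices, not from the thread):

```sql
-- (1) Cap rows *returned* with a cursor: the underlying scan only runs
-- as far as needed to produce the fetched rows.
BEGIN;
DECLARE c CURSOR FOR SELECT id FROM a WHERE id % 10 = 0 ORDER BY id;
FETCH 100 FROM c;
CLOSE c;
COMMIT;

-- (2) Cap *work* with a time budget instead of a row budget; the query
-- errors out if it runs over, and SET LOCAL reverts at COMMIT.
BEGIN;
SET LOCAL statement_timeout = '100ms';
SELECT id FROM a WHERE id % 10 = 0 ORDER BY id;
COMMIT;
```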
On Fri, Aug 19, 2016 at 4:58 PM Thomas Güttler
wrote:
>
>
> On 19.08.2016 at 09:42, John R Pierce wrote:
> > On 8/19/2016 12:32 AM, Thomas Güttler wrote:
> >> What do you think?
> >
> > I store most of my logs in flat textfiles syslog style, and use grep for
> > adhoc querying.
> >
> > 200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon
> > you're talking big tables.
On Fri, Aug 19, 2016 at 2:25 PM Victor Blomqvist wrote:
> On Fri, Aug 19, 2016 at 1:31 PM, Sameer Kumar
> wrote:
>
>>
>>
>> On Fri, 19 Aug 2016, 1:07 p.m., Victor Blomqvist wrote:
>>
>>> Hi,
>>>
>>> Is it possible to break/limit a query so that it returns whatever
>>> results found after having checked X amount of rows in an index scan?
On 19.08.2016 at 09:42, John R Pierce wrote:
On 8/19/2016 12:32 AM, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for adhoc
querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're
talking big tables.
On 8/19/2016 12:32 AM, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for
adhoc querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon
you're talking big tables.
in fact that's several rows/second on a 24/7 basis
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
I am unsure which DB to choose: Postgres, ElasticSearch or ...?
We don't have high traffic. About 200k rows per day.
My heart beats f
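Translated into DDL, the column list above might look like this (a sketch; the type choices and names are assumptions, not from the thread):

```sql
CREATE TABLE logs (
    id        bigserial PRIMARY KEY,  -- auto-generated primary key
    ts        timestamptz NOT NULL,   -- timestamp
    host      text        NOT NULL,
    service   text        NOT NULL,   -- service-on-host
    loglevel  text        NOT NULL,
    msg       text        NOT NULL,
    details   jsonb                   -- optional json payload
);
```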