Quick example:
xof=# CREATE TABLE t1 (id SERIAL PRIMARY KEY);
CREATE TABLE
xof=# CREATE TABLE t2 (id INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY);
CREATE TABLE
xof=# \d+
List of relations
Schema | Name | Type | Owner | Persistence | Access method
> I agree it’s a documentation bug
Thanks for the confirmation. Then maybe I can start a new thread on
pgsql-hackers about this bug and create a patch for it myself.
> then the regression test should be fixed as well
I will add a regression test for sequences as well.
We can remove *SE
tes that pgstattuple supports relations, toast tables,
>> materialized views, and sequences.
>> However, when I executed a query with a sequence, I encountered the
>> following error:
>>
>> postgres=> CREATE SEQUENCE serial START 101;
>> CREATE SEQUENCE
>>
ter/contrib/pgstattuple/pgstattuple.c#L255-L259>.
It indicates that pgstattuple supports relations, toast tables,
materialized views, and sequences.
However, when I executed a query with a sequence, I encountered the
following error:
postgres=> CREATE SEQUENCE serial START 101;
CREATE
s relations, toast tables,
> materialized views, and sequences.
> However, when I executed a query with a sequence, I encountered the
> following error:
>
> postgres=> CREATE SEQUENCE serial START 101;
> CREATE SEQUENCE
> postgres=> SELECT * FROM pgstattuple('serial
On Mon, Jun 24, 2024 at 7:09 AM Ayush Vatsa
wrote:
[snip]
> How can one use pgstattuple on sequences?
>
Out of curiosity... *why*?
Hi
I don't think sequences are supported; please refer to the following
documentation.
https://www.postgresql.org/docs/current/pgstattuple.html
Regards
Kashif Zeeshan
On Mon, Jun 24, 2024 at 4:09 PM Ayush Vatsa
wrote:
> Hi PostgreSQL Community,
>
> I was recently exploring the pgstatt
sequences.
However, when I executed a query with a sequence, I encountered the
following error:
postgres=> CREATE SEQUENCE serial START 101;
CREATE SEQUENCE
postgres=> SELECT * FROM pgstattuple('serial');
ERROR: only heap AM is supported
postgres=>
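As a side note, while pgstattuple rejects sequences with that error, the single row of state a sequence stores can still be read directly (a small sketch; `serial` is the sequence name used above):

```sql
-- A sequence exposes one row of state that can be selected directly:
CREATE SEQUENCE serial START 101;
SELECT last_value, log_cnt, is_called FROM serial;

-- The pg_sequences view also reports it without touching the relation:
SELECT schemaname, sequencename, last_value
FROM pg_sequences
WHERE sequencename = 'serial';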
It got stuck in this i
Correction: the trigger on the remote table isn't going to work. I was trying
so many different things today that I confused myself. Sorry...
On the bright side, the remote sequence works great and I can insert records
from my database to the remote database now, which is a step forward.
I
Thanks Tom,
Your reply helped point me in the right direction. With a little trial and
error I came up with a hack to solve my issue.
First off, I create a shared sequence for the user_id with a technique
described here:
[
https://paquier.xyz/postgresql-2/global-sequences-with
"peter.boris...@kartographia.com" writes:
> I guess the FDW is not "seeing" the default value for status and the
> bigserial sequence for the id column. Is there anyway around this?
The idea is that you should attach default clauses to the foreign
table definition. A remote serial column is a h
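One way that suggestion can be sketched, assuming dblink is available and `remote_srv` is an existing postgres_fdw server: give the foreign table a local DEFAULT that pulls values from the remote sequence, so both sides draw from the same sequence. Every name and connection string here is illustrative.

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Fetch the next value from the remote sequence (placeholder connstr/name):
CREATE OR REPLACE FUNCTION remote_test_id() RETURNS bigint AS $$
  SELECT v FROM dblink('host=remote dbname=appdb',
                       'SELECT nextval(''app.test_id_seq'')')
           AS t(v bigint);
$$ LANGUAGE sql;

CREATE FOREIGN TABLE app_test (
    id         bigint  NOT NULL DEFAULT remote_test_id(),
    first_name text,
    last_name  text,
    status     integer NOT NULL DEFAULT 1
) SERVER remote_srv OPTIONS (schema_name 'app', table_name 'test');
```

The key point is that postgres_fdw sends every column explicitly, so the remote table's own defaults never fire; the defaults have to live on the foreign table definition.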
Hello,
I have a few questions about inserting data using Foreign Data Wrappers (FDW).
Consider this simple example.
On PostgreSQL Database A (remote):
CREATE TABLE APP.TEST (
ID BIGSERIAL NOT NULL,
FIRST_NAME text,
LAST_NAME text,
STATUS integer NOT NULL DEFAULT 1,
CONSTRAINT
On Sun, Oct 29, 2023, 13:59 Ravi Malghan wrote:
> I have a master sql file used to create all the database objects for an
> application. Is there an application/script to compare what's in the actual
> database and my sql file and display the differences?
>
Not that I know of. You can find stuf
I have a master sql file used to create all the database objects for an
application. Is there an application/script to compare what's in the actual
database and my sql file and display the differences?
> --- example: let's get 3 sequences
>
> select next_sequence(), next_sequence(), next_sequence();
> --- inspect the table to see what happened
> select * from my_sequence;
Thanks for your input on this issue.
SQLP
Thanks. I was sort of expecting that answer but I didn't see where it
was addressed specifically. Unfortunately I'm stuck on v12 for the
time being so I guess it's back to the workaround.
On Tue, Sep 28, 2021 at 2:13 PM Tom Lane wrote:
>
> Jeff Hoffmann writes:
> > I am using postgresql-12.8.
Jeff Hoffmann writes:
> > I am using postgresql-12.8. I am making use of an identity
> > column as part of a script to process some updated data. Because of
> the way the script is called I don't necessarily know if this column
> is going to exist in the table I am working on so I have
I am using postgresql-12.8. I am making use of an identity
column as part of a script to process some updated data. Because of
the way the script is called I don't necessarily know if this column
is going to exist in the table I am working on so I have a step that
will conditionally
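For the conditional step described above, the catalog can be probed directly; `pg_attribute.attidentity` is empty for ordinary columns, `'a'` for GENERATED ALWAYS and `'d'` for GENERATED BY DEFAULT. A sketch (`my_table` is an illustrative name):

```sql
-- List identity columns of a table, if any:
SELECT attname, attidentity
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass
  AND attnum > 0
  AND NOT attisdropped
  AND attidentity <> '';
```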
Hi,
I have a use case where I need to create sequences concurrently. I use
sequences instead of a table with counters to keep lock contention for
incrementing the id low. The creation of new sequences rarely occurs during
normal application operation.
I check via
On Mon, Mar 23, 2020 at 9:58 AM Daniel Verite
wrote:
> pabloa98 wrote:
>
> > When I have a medium number of sequence I will report how it behaves. It
> > will take some time though.
>
> Be aware that creating the sequences on the fly has the kind of race
> c
pabloa98 wrote:
> When I have a medium number of sequence I will report how it behaves. It
> will take some time though.
Be aware that creating the sequences on the fly has the kind of race
condition that you wanted to avoid in the first place.
For instance consider this execut
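One way to narrow that window is `CREATE SEQUENCE IF NOT EXISTS` plus a retry on collision; concurrent creators can still trip over each other in edge cases, so this is a sketch rather than a guarantee, and all names are illustrative:

```sql
DO $$
BEGIN
  EXECUTE format('CREATE SEQUENCE IF NOT EXISTS %I',
                 format('obj_counter_%s_%s', 1, 1));
EXCEPTION WHEN unique_violation OR duplicate_table THEN
  NULL;  -- another session created it first; that is fine
END $$;
```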
format('select nextval("obj_counter_%s_%s");', i_group, i_element)
> into strict c_seqval;
>
> or, more paranoidly:
>
> execute format('select nextval(%I);', format('obj_counter_%s_%s', i_group,
> i_element)) into strict c_seqval;
>
>
On Sun, Mar 22, 2020 at 5:36 PM Christopher Browne
wrote:
> On Sun, 22 Mar 2020 at 17:54, pabloa98 wrote:
>
>>
>> So the question may actually be:
>>>
>>> How do we improve our locking code, so we don't have to spawn millions
>>> of sequences
On Sun, Mar 22, 2020 at 5:36 PM Christopher Browne
wrote:
>
> Then, on any of the tables where you need to assign sequence values, you'd
> need to run an "after" trigger to do the assignment. The function that
> finds the sequence value is kind of analogous:
> create or replace function get_next
On Sun, 22 Mar 2020 at 17:54, pabloa98 wrote:
>
> So the question may actually be:
>>
>> How do we improve our locking code, so we don't have to spawn millions
>> of sequences?
>>
>> What is the locking method you are using?
>>
>
> I am not u
olve those problems, then
> to create a whole new set of problems by using millions of sequences.
>
> What we are doing in the **OTHER section** with row locks is basically:
BEGIN;
...
x = SELECT last_value + 1 FROM counter WHERE group = ... AND element =
FOR UPDATE;
...
if everythin
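The same counter can be bumped atomically with `UPDATE ... RETURNING`, avoiding the separate `SELECT ... FOR UPDATE` step (column names are guessed from the fragment above; `group` is a reserved word, so `grp` is used):

```sql
BEGIN;
UPDATE counter
   SET last_value = last_value + 1
 WHERE grp = 1 AND element = 1
RETURNING last_value;
-- Gapless if the transaction commits, but all writers of this
-- (grp, element) pair serialize on the row lock.
COMMIT;
```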
On 3/22/20 2:53 PM, pabloa98 wrote:
So the question may actually be:
How do we improve our locking code, so we don't have to spawn millions
of sequences?
What is the locking method you are using?
I am not using locking with the million sequence solution. I do not
> So the question may actually be:
>
> How do we improve our locking code, so we don't have to spawn millions
> of sequences?
>
> What is the locking method you are using?
>
I am not using locking with the million sequence solution. I do not want
something that
check digit.
-
18 digits for item identifier.
So the question may actually be:
How do we improve our locking code, so we don't have to spawn millions
of sequences?
What is the locking method you are using?
The lock part is because we solved a similar problem with a counter by
row lo
of various
> solutions - and maybe using a sequence is the best (or least bad)
> solution.
>
> OK. In that case, I will proceed with the millions of sequences. We will see.
Nonononononononono! (channelling Jean-Claude Laprie[1])
I was trying to say that in each case one shou
On 2020-03-21 14:51:35 -0600, Rob Sargent wrote:
> > On Mar 21, 2020, at 1:13 PM, Peter J. Holzer wrote:
> >
> > On 2020-03-21 12:55:33 -0600, Rob Sargent wrote:
> >> To me the description of the ID smacks of database-in-the-name folly. I
> >> recognize that OP is likely unable to take another pa
ave
> an implicit sequence attached to it and the column in new rows will
> automatically have values from the sequence assigned to it.
> "
>
> What you want is a built in method to pull from different sequences
> depending on the value of another column or values from multiple columns
> in the same table.
>
> Exactly. That will work fine too.
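A hedged sketch of that trigger-based routing, where each (grp, element) pair has its own sequence and a BEFORE trigger picks the right one per row (all names are illustrative; assumes the per-pair sequences already exist):

```sql
CREATE OR REPLACE FUNCTION assign_code() RETURNS trigger AS $$
BEGIN
  -- Route to the sequence belonging to this row's (grp, element) pair:
  NEW.code := nextval(format('code_%s_%s_seq', NEW.grp, NEW.element));
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER event_assign_code
  BEFORE INSERT ON event
  FOR EACH ROW EXECUTE FUNCTION assign_code();
```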
es of various
solutions - and maybe using a sequence is the best (or least bad)
solution.
OK. In that case, I will proceed with the millions of sequences. We will
see.
Anyway, it would be awesome to have a sequence data type in a future
version of postgresql. It would solve a lot of problems similar to this
one.
> On Mar 21, 2020, at 1:13 PM, Peter J. Holzer wrote:
>
> On 2020-03-21 12:55:33 -0600, Rob Sargent wrote:
>> To me the description of the ID smacks of database-in-the-name folly. I
>> recognize that OP is likely unable to take another path. I’ll not push this
>> any
>> further.
>
> Identif
the best (or least bad)
> solution.
>
>
OK. In that case, I will proceed with the millions of sequences. We will
see.
Anyway, it would be awesome to have a sequence data type in a future
version of postgresql. It would solve a lot of problems similar to this
one.
Pablo
On 2020-03-21 12:55:33 -0600, Rob Sargent wrote:
> To me the description of the ID smacks of database-in-the-name folly. I
> recognize that OP is likely unable to take another path. I’ll not push this
> any
> further.
Identifiers often have internal structure. In Austria for example, the
social
te:
> > > > > First, it sounds like you care about there being no gaps in the
> > > > > records you end
> > > > > up saving. If that is the case then sequences will not work for you.
> > > >
> > > > I think (but I would love to b
> On Mar 21, 2020, at 12:18 PM, pabloa98 wrote:
>
>
>
>> Why? "Print" and "screen" forms have all sorts of practical restrictions
>> like this.
>>
> Legacy, I guess. It is all digital stuff now, but the final result is an
> identifier that people can read and recognize what it refers to.
> Why? "Print" and "screen" forms have all sorts of practical restrictions
> like this.
>
> Legacy, I guess. It is all digital stuff now, but the final result is an
identifier that people can read and recognize what it refers to.
Pablo
On Fri, Mar 20, 2020 at 9:04 PM John W Higgins wrote:
>
>
> On Fri, Mar 20, 2020 at 8:13 PM pabloa98 wrote:
>
>>
>> I hope I described the problem completely.
>>
>>
> 1) What is a group - does it exist prior to records being inserted? How
> many groups are you working with? How long do they live
> > As to below that is going to require more thought.
> >
> Still no word on the actual requirement. As someone who believes
> consecutive numbers on digital invoices is simply a mistaken interpretation
> of the paper based system, I suspect a similar error here. But again we
> haven’t really hear
On 3/21/20 12:02 PM, Rob Sargent wrote:
On Mar 21, 2020, at 10:47 AM, Adrian Klaver wrote:
On 3/20/20 8:13 PM, pabloa98 wrote:
Nothing I saw that said int could not become bigint.
My bad. The code cannot be a bigint. Or it could be a bigint between 1 to
:)
Aah, that was the cou
> On Mar 21, 2020, at 10:47 AM, Adrian Klaver wrote:
>
> On 3/20/20 8:13 PM, pabloa98 wrote:
>>Nothing I saw that said int could not become bigint.
>> My bad. The code cannot be a bigint. Or it could be a bigint between 1 to
>> :)
>
>
> Aah, that was the counter Peter was talk
there will
be more operations in other tables using the same transaction) and I
thought that creating/dropping sequences could be a solution. But I was
not sure. I am not sure how it will behave with millions of sequences.
If there is another solution different than millions of sequences that
do
On Fri, Mar 20, 2020 at 8:13 PM pabloa98 wrote:
>
> I hope I described the problem completely.
>
>
1) What is a group - does it exist prior to records being inserted? How
many groups are you working with? How long do they live for?
2) #1 but for element
3) How many records per second per group pe
On Friday, March 20, 2020, pabloa98 wrote:
>
> If there is another solution different than millions of sequences that do
>> not block, generate few gaps (and those gaps are small) and never generate
>> collisions then I will happily use it.
>
>
You are going to have
siness rules.
Regarding to the implementation of this. Our concern is to choose something
not slow (it does not need to be super fast because there will be more
operations in other tables using the same transaction) and I thought that
creating/dropping sequences could be a solution. But I was not sure. I
On Fri, Mar 20, 2020 at 3:59 PM Peter J. Holzer wrote:
> On 2020-03-19 16:48:19 -0700, David G. Johnston wrote:
> > First, it sounds like you care about there being no gaps in the records
> you end
> > up saving. If that is the case then sequences will not work for you.
>
&
t; INSERT INTO event(group, element, code)
>> VALUES (
>> 1,
>> 1,
>> ( SELECT seq.nextval('code_1_1_seq') FROM pair p WHERE
>> p.group=1 and p.code=1 )
>> );
>>
>> Or perhaps storing
. If that is the case then sequences will not work for you.
I think (but I would love to be proven wrong), that *nothing* will work
reliably, if
1) you need gapless numbers which are strictly allocated in sequence
2) you have transactions
3) you don't want to block
Rationale:
Regardless of ho
> On Mar 20, 2020, at 5:29 PM, Peter J. Holzer wrote:
>
>
>>
>>
>> OP has said small gaps are ok.
>
> Yes. This wasn't a response to the OP's requirements, but to David's
> (rather knee-jerk, IMHO) "don't use sequences&
up saving. If that is the case then sequences will not work for you.
> >
> > I think (but I would love to be proven wrong), that *nothing* will work
> > reliably, if
> >
> > 1) you need gapless numbers which are strictly allocated in sequence
> > 2)
> On Mar 20, 2020, at 4:59 PM, Peter J. Holzer wrote:
>
> On 2020-03-19 16:48:19 -0700, David G. Johnston wrote:
>> First, it sounds like you care about there being no gaps in the records you
>> end
>> up saving. If that is the case then sequences will not work f
On 2020-03-19 16:48:19 -0700, David G. Johnston wrote:
> First, it sounds like you care about there being no gaps in the records you
> end
> up saving. If that is the case then sequences will not work for you.
I think (but I would love to be proven wrong), that *nothing* will work
rel
t;>MAXVALUE
> >> NOT NULL,
> >> CONSTRAINT PRIMARY KEY (group, element)
> >> );
> >>
> >> And then:
> >>
> >> INSERT INTO event(group, element, code)
> >> VALUES (
> >>
tion so it performs as a
> > sequence without having race conditions between concurrent
> > transactions?
> >
> > The other option is to create sequences for each new pair of (group,
> > element) using triggers. There are millions of pairs. So this
> > approach will g
p, element)
);
And then:
INSERT INTO event(group, element, code)
VALUES (
1,
1,
( SELECT seq.nextval('code_1_1_seq') FROM pair p WHERE
p.group=1 and p.code=1 )
);
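For what it's worth, `seq.nextval` is Oracle syntax; in PostgreSQL the insert above would be written with `nextval('name')` (using `grp` instead of `group`, since GROUP is a reserved word):

```sql
INSERT INTO event (grp, element, code)
VALUES (1, 1, nextval('code_1_1_seq'));
```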
Or perhaps storing all the sequences in the same table as rows will
have the same behavio
vent(group, element, code)
VALUES (
1,
1,
( SELECT seq.nextval('code_1_1_seq') FROM pair p WHERE
p.group=1 and p.code=1 )
);
Or perhaps storing all the sequences in the same table as rows will have
the same behavior.
If code is just something to show the s
e behavior. Is that
> possible by using a table like "counter" table, where the counter
> could be increased out of the transaction so it performs as a
> sequence without having race conditions between concurrent
> transactions?
>
> The other option is to create sequences for
On Thursday, March 19, 2020, pabloa98 wrote:
>
>
>> We will use a CACHE 1. This is because when nextval('seq') is invoked, we
> are hitting 3 or 4 more tables so the sequence will not be a performance
> blocker (compared with all the operations in the transaction).
>
The other implementation deta
On Thursday, March 19, 2020, Tom Lane wrote:
> Michael Lewis writes:
> > On Thu, Mar 19, 2020, 5:48 PM David G. Johnston <
> david.g.johns...@gmail.com>
> > wrote:
> >> However, one other consideration with sequences: do you care that
> >> PostgreS
1,
( SELECT seq.nextval('code_1_1_seq') FROM pair p WHERE p.group=1
and p.code=1 )
);
Or perhaps storing all the sequences in the same table as rows will have
the same behavior.
Pablo
On Thu, Mar 19, 2020 at 7:56 PM Tom Lane wrote:
> Michael Lewis writes:
> >
On Thu, Mar 19, 2020 at 9:12 PM Adrian Klaver
wrote:
> On 3/19/20 7:38 PM, Michael Lewis wrote:
> >
> >
> > On Thu, Mar 19, 2020, 5:48 PM David G. Johnston
> > mailto:david.g.johns...@gmail.com>> wrote:
> >
> > However, one other c
On 3/19/20 7:38 PM, Michael Lewis wrote:
On Thu, Mar 19, 2020, 5:48 PM David G. Johnston
mailto:david.g.johns...@gmail.com>> wrote:
However, one other consideration with sequences: do you care that
PostgreSQL will cache/pin (i.e., no release) every single sequence
you tou
Michael Lewis writes:
> On Thu, Mar 19, 2020, 5:48 PM David G. Johnston
> wrote:
>> However, one other consideration with sequences: do you care that
>> PostgreSQL will cache/pin (i.e., no release) every single sequence you
>> touch for the lifetime of the session?
On Thu, Mar 19, 2020, 5:48 PM David G. Johnston
wrote:
> However, one other consideration with sequences: do you care that
> PostgreSQL will cache/pin (i.e., no release) every single sequence you
> touch for the lifetime of the session? (I do not think DISCARD matters here
>
nt INT NOT NULL,
>> code INT NOT NULL,
>> CONSTRAINT PRIMARY KEY(code, element, group)
>> );
>>
> Unless event table is searched by code more than group, you probably want to
> maintain the key order from the pair table's primary key.
>
> OK. I will do that.
>
> If gaps are ok do you still need multiple sequences?
>
> I need to start each "code" value from 1 in each (group, element) pair. This
> is because we only have codes for each pair. We do not want to waste
> numbers. If there is a gap it is OK, but not a gap of millions of numbers.
nt, group)
> );
>
> Unless event table is searched by code more than group, you probably want
> to maintain the key order from the pair table's primary key.
>
OK. I will do that.
>
> If gaps are ok do you still need multiple sequences?
>
> I need to start each "c
T NOT NULL,
> CONSTRAINT PRIMARY KEY(code, element, group)
> );
>
Unless event table is searched by code more than group, you probably want to
maintain the key order from the pair table's primary key.
If gaps are ok do you still need multiple sequences?
"counter", I could still have counter
> >> collisions between 2 transactions. I need truly sequence
> >> behavior. Is that possible by using a table like "counter"
> >> table, where the counter could be increased out of the
> &g
ld be increased out of the
transaction so it performs as a sequence without having race
conditions between concurrent transactions?
>
> The other option is to create sequences for each new pair of
(group, element) using triggers. There are millions of pair
> used for nothing else. When we insert millions of pairs group/element, the
> trigger in that table will generate millions of sequences.
>
> My question is how PostgreSQL will behave. Could it deal with millions of
> sequences? What about system operations such as vacuum, etc.?
>
>
Fi
s
>> between 2 transactions. I need truly sequence behavior. Is that possible by
>> using a table like "counter" table, where the counter could be increased
>> out of the transaction so it performs as a sequence without having race
>> conditions between concurre
truly sequence behavior. Is that possible by using a
> > table like "counter" table, where the counter could be increased out of the
> > transaction so it performs as a sequence without having race conditions
> > between concurrent transactions?
> >
>
> > If I use a table "counter", I could still have counter collisions
> between 2 transactions. I need truly sequence behavior. Is that possible by
> using a table like "counter" table, where the counter could be increased
> out of the transaction so it performs as
avior. Is that possible by using a
> table like "counter" table, where the counter could be increased out of the
> transaction so it performs as a sequence without having race conditions
> between concurrent transactions?
>
> The other option is to create sequences for
rforms as a sequence without having race conditions
between concurrent transactions?
The other option is to create sequences for each new pair of (group,
element) using triggers. There are millions of pairs. So this approach will
generate millions of sequences.
How a PostgreSQL database would behav
Oh it is the one we are working on.
One of my team members brought up this issue from a job where we worked on
a vendor-designed one.
I am convinced we do not have an issue now.
Thanks for your expertise.
On Tue, Oct 22, 2019 at 4:42 PM Adrian Klaver
wrote:
> On 10/22/19 1:35 PM, stan wrote:
On 10/22/19 1:35 PM, stan wrote:
On Tue, Oct 22, 2019 at 12:48:41PM -0700, Adrian Klaver wrote:
On 10/22/19 10:48 AM, stan wrote:
Please reply to list also:
Ccing list.
Sorry if my description was not clear.
No, we do not mix test and production data. Let me try to clarify the
question. L
On Tue, Oct 22, 2019 at 12:48:41PM -0700, Adrian Klaver wrote:
> On 10/22/19 10:48 AM, stan wrote:
> Please reply to list also:
> Ccing list.
>
>
> >
> > Sorry if my description was not clear.
> >
> > No, we do not mix test and production data. Let me try to clarify the
> > question. Looking
On 10/22/19 10:48 AM, stan wrote:
Please reply to list also:
Ccing list.
Sorry if my description was not clear.
No, we do not mix test and production data. Let me try to clarify the
question. Looking at a pg_dump, I see the following:
CREATE SEQUENCE public.customer_key_serial
START
dencies on keys developed from sequences after a
backup/restore cycle,
Will I potentially have these issues? If so, what can I do different to
avoid this being an issue?
It is not clear to me what you are doing:
1) Are you using pg_dump/pg_restore to populate a database?
If so it will take c
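If pg_dump/pg_restore is in play: a dump carries both the sequence definition and its current position, so keys generated after a restore continue where they left off. The emitted statements look roughly like this (names and the `42` are illustrative):

```sql
CREATE SEQUENCE public.customer_key_serial
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;

-- pg_dump also records the sequence's current position:
SELECT pg_catalog.setval('public.customer_key_serial', 42, true);
```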
pendencies in the correct order, so that the keys that later
structures depend on already exist.
Today I was going over the design for the current project with a friend,
whose expertise i respect. he said that he had issues on a system designed
by an OEM that had dependencies on keys developed from seque
Karl Martin Skoldebrand wrote on 02.07.2019 at 13:20:
>>> Now I want to insert more countries, between France and Ireland.
>>
>> That is a wrong assumption - there is no "between" for rows in a
>> relational database.
>>
>
> Yes, I'm fairly aware of this. However the application the
> databas
> Yes, I'm fairly aware of this. However, the application the database table
> belongs to seems to rely on a specific order in the database. I.e., if I just
> add values to the table they end up, possibly due to how the application is
> coded, in an undesirable spot on the web page.
> That is why
On Tue, Jul 02, 2019 at 11:20:42AM +, Karl Martin Skoldebrand wrote:
> Yes, I'm fairly aware of this. However the application the database table
> belongs to seems to rely on a specific order in the database. I.e. if I just
> add value to the table they end up, possibly due to how the applic
Karl Martin Skoldebrand wrote on 02.07.2019 at 12:44:
> Looking more at sequences, I have some sequences that start with ID 1,
> incrementing each record by 1.
>
> So I have e.g. 1. Spain 2. Germany 3. France 4. Ireland 5. Norway
>
> Now I want to insert more countries,
Karl Martin Skoldebrand wrote on 02.07.2019 at 12:44:
> Looking more at sequences, I have some sequences that start with ID 1,
> incrementing each record by 1.
>
> So I have e.g. 1. Spain 2. Germany 3. France 4. Ireland 5. Norway
>
> Now I want to insert more countries,
Hi again,
Looking more at sequences, I have some sequences that start with ID 1,
incrementing each record by 1.
So I have e.g. 1. Spain 2. Germany 3. France 4. Ireland 5. Norway
Now I want to insert more countries, between France and Ireland. And also alter
the increment. That latter part might
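Since rows have no position in a relational table, the usual way to get a controllable display order is a dedicated sortable column rather than the sequence-generated id. A sketch (table and column names are illustrative):

```sql
-- A separate sort_order column leaves room for later inserts and keeps
-- the sequence id untouched:
ALTER TABLE country ADD COLUMN sort_order integer;
UPDATE country SET sort_order = id * 10;   -- leave gaps between rows
INSERT INTO country (name, sort_order) VALUES ('Iceland', 35);
SELECT name FROM country ORDER BY sort_order;
```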
Thanks for replies.
I looked at a tutorial page that had more instructions in it, so I got
confused. I was aware sequences generate integers.
//Martin S
There's nothing really "in" a sequence; it just generates numbers for
you, generally for unintelligent primary keys. Those statements
correctly create new sequences, which will start from 1.
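A minimal illustration of that point, using the sequence name from the original mail:

```sql
CREATE SEQUENCE db_table_seq;
SELECT nextval('db_table_seq');  -- 1
SELECT nextval('db_table_seq');  -- 2
SELECT currval('db_table_seq');  -- 2, the last value in this session
```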
Tim Clarke
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
On 24/06/2
Hi Karl
I did not double-check with the docs whether the SQL you posted is
valid (I guess it could be, by applying the defaults); however, I do not see
how sequences would govern the sending of data to users.
Kind regards
Thiemo
Quoting Karl Martin Skoldebrand :
Hi,
I'm tryi
Hi,
I'm trying to troubleshoot a case where users are getting data from a database
despite having said they are not interested.
There is a number of sequences in the database in question, but to me they look
"empty", like stubs.
create sequence db_table_seq;
alter sequen
On 03/09/2018 02:22 PM, Adrian Klaver wrote:
On 03/09/2018 12:08 PM, Charlin Barak wrote:
Hi,
I can find sequences owned by a schema from
information_schema.sequences and pg_class but how do I find sequences
owned by a user/role? What pg table should I be looking at?
https
On 03/09/2018 12:08 PM, Charlin Barak wrote:
Hi,
I can find sequences owned by a schema from information_schema.sequences
and pg_class but how do I find sequences owned by a user/role? What pg
table should I be looking at?
https://www.postgresql.org/docs/10/static/view-pg-sequences.html
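Alternatively, a catalog query along these lines lists sequences by owning role (a sketch; relkind 'S' marks sequences, and the role name is illustrative):

```sql
SELECT n.nspname AS schema, c.relname AS sequence, r.rolname AS owner
FROM pg_class c
JOIN pg_roles     r ON r.oid = c.relowner
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'S'
  AND r.rolname = 'some_role';
```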
Hi,
I can find sequences owned by a schema from information_schema.sequences
and pg_class but how do I find sequences owned by a user/role? What pg
table should I be looking at?
Thanks.
Charlin