On Sun, Sep 29, 2013 at 08:20:02PM +0100, Tim Bunce wrote:
> Has anyone got any experience with recent version of Test::Database?
> 
> I'm wondering how it would integrate with DBIT and what extensions might
> be needed.

We discussed this on #dbi IRC around 30-09-2013. I've copied a (lightly
edited) transcript here for the record.

Below that I've written up some thoughts on how Test::Database (TDB) fits
into DBI::Test (DBIT).


[08:26am] ribasushi: we can not rely on the ability of a user to 'create 
database' period, unless we want to limit the ability to run tests to server 
admins
[08:26am] mje: I know of no way via ODBC and SQLGetInfo to work out how to 
create a database

[08:27am] ribasushi: I guess the real question is "what is the end-user scope"
[08:27am] [Sno]: the question regarding that is "who is end-user"? 
[08:27am] ribasushi: that is do we want to be able to tell a client "hey, run 
this magic on your regular DSN, it will tell us if your DBD layer is sane"
[08:28am] ribasushi: or do we want to have to find the version in question and 
install it locally with admin rights, so we can test it for sanity
[08:28am] mje: I'm not sure permissions need to matter so much as DBIT is for 
DBD devs and DBI right now and they should have admin - but depends on later 
scope
[08:29am] mje: I know many places where even create table is not allowed
[08:29am] ribasushi: mje: it is not only for DBD devs
[08:29am] [Sno]: we want to address (and attract) a lot of stake-holders
[08:30am] [Sno]: so this question is important and maybe we have to find a way 
how to handle it
[08:30am] ribasushi: the whole point (as I understood it) is that an end user 
can set an extra envvar or something, run `cpanm DBD::MyDriver` to get the 
latest version, and the entire test stack pertinent to DBD::MyDriver (including 
tests passed down from DBI core etc) are executed, so that the user is 
confident the stack is still whole
[08:31am] ribasushi: this wasn't directly discussed in the goals emails, it is 
a good idea to clarify if we want to have the *possibility* of that
[08:31am] ribasushi: if we go with create database - this is straight out
[08:31am] [Sno]: in a lot of cases, yes we're out then

[08:31am] BooK: given that TDB has a config file, that's not a bad place to 
write down that kind of information
[08:32am] [Sno]: probably we only have to define a way to deal with that kind 
of extra information
[08:33am] [Sno]: create table is only one thing - we might be able to do 
"create table" but not to "create database" so we need to know where we have a 
name clash
[08:33am] mje: the advantage of create database is obvious - can be created and 
torn down without polluting the system but I think you need to cope with not 
having create database - for permissions and achievability
[08:34am] ribasushi: create table is *generally* available on a stock install
[08:34am] [Sno]: yep, a kind of name prefix
[08:34am] mje: ribasushi, agreed

[08:34am] ribasushi: if a user has no perms to create table - then it is very 
likely the user should not be running DBIT on this server in the first place
[08:34am] ribasushi: as it may be critical prod. or somesuch
[08:36am] [Sno]: but we need to deal with table name prefixes in existing 
databases as well
[08:36am] mje: [Sno], what do you mean by "table name prefixes"
[08:36am] ribasushi: this will have to happen per driver
[08:37am] ribasushi: some engines (hello Oracle) do not even support _ as a 
start of identifier
[08:37am] [Sno]: instead of "create table foo" do a "create table ${prefix}foo"
[08:37am] ribasushi: other engines (Oracle, we meet again) have laughable 
identifier limits (30 chars)
[08:37am] [Sno]: this should probably be part of the TDB/DSNP plugin, to decide 
how such a prefix looks
[08:38am] ribasushi: hence <ribasushi> this will have to happen per driver
[08:38am] mje: get_info on a DBD should tell you max table name length
[08:38am] [Sno]: yes, but "driver" is somewhat unspecific at this point
[08:38am] ribasushi: sorry, when I say driver I mean it in DBIC sense, this is 
of course incorrect
[08:39am] ribasushi: s/driver/storage backend/
[08:39am] [Sno]: ack
[08:40am] ribasushi: in the DBIC sense "driver" actually means "particular 
RDBMS version with a particular connection path to it", which for the use-case 
of DBIT is overkill
[08:40am] [Sno]: it might be suitable to have some kind of miniaturized 
template engine (as for the Test::SpreadVariantsAround) which can perform some 
kind of dynamic keyword replacement
[08:40am] BooK: TDB basically has a basename and sticks a number at the end
[08:42am] [Sno]: BooK: if I understood ribasushi correctly, he has some test 
workflows which depend on each other across several operating system 
processes - so the number has to be shared there
[08:42am] BooK: TDB maintains a mapping of cwd -> dbh
[08:43am] BooK: assuming a test suite is run from the extracted dist dir
[08:43am] [Sno]: but other test workflows should be parallelizable - so some 
kind of $pid is required
[08:43am] BooK: and also, maybe the db instance is shared between test servers, 
so we don't want them to get the same dbh
[08:44am] BooK: in TDB I added a "key" in the config file, as a way to manually 
distinguish servers
[08:45am] BooK: so the dsn "name" is basically tdd_$driver_$login_$key atm
[08:45am] BooK: plus a number

[08:49am] BooK: what's the workflow for tests?
[08:49am] BooK: does a test describe what it needs, and only run if it gets it?
[08:50am] BooK: what if the system has more than one available? should the 
tests be looped over all available matching things?
[08:52am] BooK: one of the assumptions of TDB is that you say I need handles 
matching this query (DBD, version, etc), and then loop over the results with a 
generic test
[08:52am] BooK: https://metacpan.org/source/BOOK/Test-Database-1.11/t/25-sql.t

[08:56am] [Sno]: I would assume that one *.t creates the required table, next n 
*.t do something there (fetch, join, update, delete) and finally one cleans up
[08:56am] BooK: so the looping over drivers/dbh is done outside of the .t files
[08:57am] BooK: makes it easier to write them too

[08:57am] [Sno]: DBIT currently is designed to define how to loop during test 
file population (configure stage)
[08:58am] BooK: that loop could very well be a .t file itself, that picks all 
the files in a subdirectory and runs them as subtests
[08:58am] [Sno]: yep
[08:59am] [Sno]: I miss timbunce_ a bit - this is something which should be put 
to Reqs/Goals
[08:59am] BooK: and then the looping test script can be a one-liner, and the 
test subdirs have a specific naming scheme
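
As an aside, here's a minimal sketch of what that looping script could look
like (the t/dbit/ directory name is an assumption, and the inner files are
assumed to be plain Test::More snippets without their own plan):

  use strict;
  use warnings;
  use Test::More;

  # run every snippet in the subdirectory as a subtest
  for my $snippet (sort glob 't/dbit/*.pl') {
      subtest $snippet => sub {
          my $ok = do $snippet;
          die $@ if $@;                                 # snippet died
          die "couldn't run $snippet: $!" unless defined $ok;
      };
  }

  done_testing();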

[09:02am] [Sno]: from my perspective everyone is sometimes a user - and 
recognizing the complexity of DBIT, we all are, depending on point of view
[09:03am] ribasushi: I meant more like where do we draw the line - is "no _ in 
table names" acceptable? is "create database must be available" acceptable?
[09:03am] ribasushi: things like that
[09:04am] [Tux]: "create database" will not be available everywhere
[09:04am] [Tux]: and it implies having rights to create databases
[09:05am] ribasushi: ... itym tables in the second sentence...?
[09:06am] mje: I would avoid _ in table names - it is also a wildcard in some 
systems
[09:06am] mje: but that might not matter - I just looked and I use it in 
DBD::Oracle and ODBC tests all over the place

[09:14am] mje: so unanswered questions arising from this are:
[09:14am] mje: who is the end user running the tests?
[09:15am] mje: can we do without (or cope without) create database?
[09:17am] mje: how are tables named to avoid parallelism problems?
[09:17am] ribasushi: mje: no create database == we crap in the user's personal 
db (if they didn't get to make a fresh one just for us) == tables must be named 
in some manner distinct from anything else that may be in the database == 
logical-ish perl-ish _-prefix for "private"
[09:17am] [Sno]: mje: from my point of view - every one of us and all other 
targeted people are end-users
[09:17am] ribasushi: I almost think that this must be a required entry in the 
config file
[09:17am] ribasushi: i.e. the user says "use this dsn and use this prefix"

[09:17am] timbunce_: I'm back.
[09:18am] mje: timbunce_, too late, we already got in a mess
[09:18am] [Sno]: currently the thing most similar to "create database" is done 
for DBD::File, down in the DSN::Provider - and that part of the "storage 
engine" backend should take care of it
[09:19am] timbunce_: I have a rough design that I think will address many of 
the issues raised here. I'll write it up in an email.
[09:19am] [Sno]: so there might be "storage engines" which can do the "create 
database" on their own - and there are some which would prefer a hint, and 
there are some which can't
[09:20am] mje: Am I right in thinking the suggestions so far mean you cannot 
talk to the DBD to use before writing the tests and deciding on table names?
[09:22am] [Sno]: mje: not so much - look at 
https://github.com/perl5-dbi/DBI-Test/blob/master/lib/DBI/Test/Case/attributes/Error.pm
which might not need a database or table

[09:21am] timbunce_: The handling of "fixtures" will need to be abstracted.
[09:26am] [Sno]: timbunce_: before you sign off to write the mail - any open 
points on variants to discuss?
[09:28am] timbunce_: [Sno] It's variants that I have a design for that I'm 
writing up. I'll also mention how I think TDB will fit in and the abstraction 
of fixtures.
[09:29am] [Sno]: I think TDB should be split into two or more parts to fit in 
(consider that it currently requires DBI, which is a chicken/egg problem)
[09:30am] timbunce_: DBIT requires DBI as well 
[09:31am] timbunce_: but yes, or at least make DBI optional in TDB

[09:31am] BooK: I actually like the idea of a one-line test script running all 
tests in a set of subdirs as subtests, and looping over all available/matching 
dsn
[09:31am] timbunce_: DBIT doesn't want handles, it wants DSN+user+pass.
[09:31am] BooK: TDB provides that also
[09:35am] timbunce_: BooK: yeap. I see TDB's role as providing a "workspace" 
within which DBIT "fixtures" can be setup for the tests to run against.
[09:42am] timbunce_: BooK: but it's important that TDB can work in places where 
an existing db needs to be used. In some cases a CREATE SCHEMA could be used to 
create an isolated workspace within an existing DB. In others that won't be 
possible and an existing db/schema will need to be used.
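
For engines that support it, the CREATE SCHEMA route might look roughly like
this (Postgres syntax shown; the schema name and the decision to drop it are
illustrative and only apply when the schema was created for the test run):

  my $schema = "dbit_ws_$$";                  # per-process workspace name
  $dbh->do("CREATE SCHEMA $schema");
  $dbh->do("SET search_path TO $schema");     # Pg-specific; other engines differ
  # ... create fixtures and run tests inside the schema ...
  $dbh->do("DROP SCHEMA $schema CASCADE");    # teardown, only because we created it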

[1:04pm] BooK: TDB right now works by replying to queries (you ask for 
something, and it gives back what matches)
[1:05pm] BooK: if the metadata is there to know if you can create 
schema|db|table, it can weed out the stuff that can't do what you need
[1:05pm] timbunce_: BooK: any thoughts on "but it's important that TDB can work 
in places where an existing db needs to be used. In some cases a CREATE SCHEMA 
could be used to create an isolated workspace within an existing DB. In others 
that won't be possible and an existing db/schema will need to be used."
[1:06pm] timbunce_: I think that just means the user indicating their 
requirement in the TDB config file (and TDB honouring it).
[1:11pm] timbunce_: BooK: "if the metadata is there", you mean in the config 
file?
[1:11pm] BooK: for example
[1:11pm] BooK: seems it's the only way to know that a given dbh will allow 
which actions
[1:12pm] BooK: although the TDB driver could probably get the info itself in 
some cases
[1:12pm] BooK: I'm sure a user can query their own list of permissions on 
mysql, for example
[1:12pm] BooK: well, I hope
[1:13pm] BooK: actually, right now the requirements are given in the "request"
[1:14pm] BooK: the current interface is @handles = Test::Database->handles( { 
dbd => 'mysql', version => '4.1' }, 'SQLite' );
[1:14pm] BooK: and you'll get a list of TDB objects that can provide connection 
info to a mysql with version >= 4.1 or a SQLite
[1:15pm] BooK: whatever is available
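
For reference, a minimal sketch of a test consuming that interface (method
names per the Test::Database docs as I understand them; the SQL and table name
are purely illustrative):

  use strict;
  use warnings;
  use Test::More;
  use Test::Database;

  my @handles = Test::Database->handles(
      { dbd => 'mysql', version => '4.1' },
      'SQLite',
  );
  plan skip_all => 'no matching test databases configured' unless @handles;

  for my $handle (@handles) {
      my $dbh = $handle->dbh;    # or ->connection_info for dsn/user/pass
      ok $dbh->do('CREATE TABLE dbit_smoke (k CHAR(10), v INTEGER)'),
          'create table via ' . $handle->dbd;
      $dbh->do('DROP TABLE dbit_smoke');
  }

  done_testing();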
[1:15pm] timbunce_: I see TDBs role as "give me access to a 'workspace' where I 
can play with the DBI" where a user has defined a list of workspaces. TDB 
should look after the details of "setting up" the workspace if needed, e.g. 
CREATE DATABASE, or CREATE SCHEMA, or "do nothing because the user has created 
one already".
[1:15pm] BooK: right
[1:16pm] timbunce_: great, I'm glad you agreed.
[1:16pm] BooK: the user in my mind was the cpan tester who setup the smoke box
[1:16pm] • timbunce_ nods

[1:16pm] ribasushi: so someone who has "root" in other words 
[1:16pm] BooK: it needs to be able to run several test suites in parallel, but 
workspaces can only be shared inside a test suite
[1:17pm] BooK: ribasushi: the basic install will probably give you only SQLite 
and CSV
[1:17pm] timbunce_: parallel issues can wait for now - I suspect there'll be an 
interaction with fixtures
[1:17pm] BooK: right now, a "test suite" is identified by its cwd
[1:17pm] BooK: (at test script startup, further chdir shouldn't affect the 
outcome)
[1:18pm] BooK: (more likely at "use Test::Database" time, actually)
[1:18pm] timbunce_: BooK: I'm unclear about the purpose of that identification
[1:18pm] • timbunce_ should reread the docs
[1:19pm] BooK: if a test suite has t/00-setup.t, t/xx-sometest.t and 
t/99-teardown.t
[1:19pm] BooK: we want all scripts to get the same workspace
[1:19pm] timbunce_: ah, and thus break parallel running 
[1:20pm] BooK: well, startup has to happen first, doesn't it?
[1:20pm] BooK: also, right now, TDB returns a list of things that match the 
query
[1:20pm] [Sno]: timbunce_: https://rt.cpan.org/Ticket/Display.html?id=59732 - 
I'd like to avoid such an RT in future ^^
[1:20pm] BooK: in my examples, I do for $handle (@handles) { ... } and run the 
same tests in each workspace
[1:20pm] [Sno]: so yes, not a prio - but when we can have it cheap, take it
[1:20pm] ribasushi: timbunce_: it is possible to preserve parallel testing (I 
mean plain prove -j, with no bullshit like Test::Aggregate) while having *some* 
sequencing in place
[1:21pm] ribasushi: timbunce_: of course the reliance on file names makes prove 
-sj impossible, but meh
[1:22pm] BooK: this morning, I realized that the looping over handles is not 
something a test author should need to write
[1:22pm] ribasushi: BooK: what do you mean by "startup" ?
[1:22pm] BooK: ribasushi: create table ?
[1:22pm] BooK: then use them in the tests, and destroy everything at the end
[1:22pm] BooK: actually there should be a "wipe_workspace" option
[1:24pm] BooK: ribasushi: for application tests, you'll probably want to set up 
the set of tables, feed in some data (maybe restore a dump? or do it using 
your application logic), and then just use and update that in subsequent tests
[1:24pm] ribasushi: mmmm create table/fixturizing should probably be per test, 
as opposed to once per test group
[1:24pm] [Sno]: BooK: we discussed that at Lancaster with (I forgot the name) 
and got a hint how to do that
[1:25pm] [Sno]: but when we have some kind of templating engine (which is more 
than what was proposed), it comes down to riba's hint - it's in the test .t file
[1:25pm] BooK: [Sno]: andk ?
[1:25pm] ribasushi: with separate *harness* processes doing the setup / test / 
teardown a lot of the flexibility and self-containment is lost
[1:26pm] [Sno]: I don't think so - andk I would remember
[1:26pm] ribasushi: at the very least I can no longer do prove -l <random test>
[1:26pm] timbunce_: I want prove -l <random test> to work. And prove -jN
[1:26pm] BooK: ribasushi: how much of that is in the tests or in the test 
framework we're talking about creating?
[1:27pm] ribasushi: this doesn't matter, the end result is important
[1:27pm] BooK: the setup phase can be long, and you only ever do it once when 
running a live app
[1:27pm] ribasushi: i.e. the deploy/fixtures can very well be templated
[1:27pm] ribasushi: but the point is they need to be per test
[1:27pm] ribasushi: if our setup phase turns out to be long - something else is 
done wrong
[1:27pm] timbunce_: For DBIT the fixtures will be small
[1:28pm] BooK: ribasushi: well, maybe because it's not unit tests any more ?
[1:28pm] ribasushi: hm?
[1:28pm] ribasushi: BooK: we are speaking past each other I think...
[1:28pm] BooK: possibly 

[1:29pm] timbunce_: BooK: at what point does TDB create the database?
[1:29pm] BooK: timbunce_: it doesn't. its only role is to give you a dsn that 
matches your expectations
[1:29pm] ribasushi: <BooK> if a test suite has t/00-setup.t, t/xx-sometest.t 
and t/99-teardown.t
[1:29pm] ribasushi: timbunce_: ^^ as far as I can understand Book suggests 
stuff happens in the "first test"
[1:30pm] ribasushi: and then everything afterwards knows how to "get along"
[1:30pm] ribasushi: which I think is unworkable
[1:30pm] timbunce_: BooK: so what's the create db in 
https://metacpan.org/source/BOOK/Test-Database-1.11/lib/Test/Database/Driver/Pg.pm
 for?
[1:30pm] timbunce_: does the code using TDB call it?
[1:31pm] mje: I'm probably very wrong but when I last looked TDB did have a 
create database method per DBD - that is what stopped me doing DBD::ODBC
[1:31pm] timbunce_: BooK: so it's $handle->driver->create_database?
[1:33pm] BooK: I think the use case I had in mind when I wrote this, is that 
you wanted a database handle, and that would be your scratchpad
[1:33pm] BooK: where you could assume that creating and dropping any table is fine
[1:34pm] BooK: the assumption is that the dsn points to an existing database
[1:34pm] BooK: you start at the create table level
[1:34pm] BooK: name()
[1:34pm] BooK: => Return the database name attached to the handle.
[1:34pm] BooK: so one db per handle / dsn
[1:35pm] timbunce_: BooK: when does create_database() get called and who calls 
it? TDB or the code that's using TDB?
[1:35pm] BooK: TDB
[1:35pm] timbunce_: ok, when?
[1:36pm] BooK: https://metacpan.org/module/BOOK/Test-Database-1.11/lib/Test/Database/Tutorial.pod#dsn-versus-driver_dsn
[1:37pm] timbunce_: BooK: so using driver_dsn= in the config triggers a create 
db but using dsn= doesn't?
[1:37pm] BooK: timbunce_: yes, exactly
[1:37pm] BooK: https://metacpan.org/source/BOOK/Test-Database-1.11/lib/Test/Database/Driver.pm#L152
[1:38pm] timbunce_: ah, that wasn't clear to me at all

[1:39pm] BooK: maybe the dsn= is limiting, and it should only have driver_dsn=
[1:40pm] BooK: and then one can decide to either work at that level (driver 
handle) directly, or be lazy and tell TDB, just give me database handle
[1:40pm] timbunce_: BooK: maybe. Seems like some refactoring and exposing of 
lower level parts would be handy.
[1:40pm] BooK: yup
[1:40pm] BooK: and TDB would create a test_table_xxx for those lazy guys
[1:41pm] timbunce_: no (at least not for DBIT)
[1:41pm] BooK: timbunce_: optionally
[1:41pm] timbunce_:                          

[1:41pm] BooK: if TDB only worked with driver_dsn, it could give you that and 
you're free to go
[1:41pm] timbunce_: That's a can of worms I'd recommend you avoid (or put in a 
separate distro)
[1:42pm] BooK: or you could say "I don't care for that, just make me a 
sandwich^Wdatabase"
[1:42pm] BooK: timbunce_: isn't it already opened in the current TDB?
[1:42pm] BooK: I just need to close the bit where it can work with database only
[1:42pm] BooK: and instead make it ALWAYS require being given a driver dsn
[1:43pm] timbunce_: (BooK: "create a test_table_xxx" gets you into a whole 
bunch of issues about column names)
[1:43pm] BooK: I meant test_database, sorry
[1:43pm] BooK: and offer the existing convenience methods for the lazy and 
impatient
[1:43pm] timbunce_: ah!
[1:43pm] BooK: indeed, I don't want to create tables
[1:43pm] • timbunce_ phew!
[1:43pm] BooK: that's the test fixtures job, right?
[1:43pm] • timbunce_ nods

[1:45pm] timbunce_: So, BooK, any thoughts/plans/proposals for specific changes 
for TDB?
[1:47pm] BooK: seems to me that I need to get rid of the dsn= bit
[1:47pm] BooK: then you guys can obtain a driver to use as you see fit
[1:48pm] • mje thinks BooK has done rather well at remembering what he wrote 
many years ago Book++
[1:48pm] BooK: mje: I'm looking at the code, and only remember my motivations 
[1:48pm] mje: regardless Book++
[1:48pm] BooK: it's one of those modules that I wrote with motivation and no 
use case
[1:48pm] BooK: timbunce_: and I can continue to provide databases to the 
existing users
[1:49pm] BooK: and spit warnings at them when I see dsn= in the config file
[1:49pm] BooK: then we can work on the metadata thingy (if that's still needed)
[1:49pm] timbunce_: I think DBIT should let TDB create the DB (or schema) if 
that's what the user wants (expressed via the config).
[1:52pm] timbunce_: else DBIT would have to duplicate that logic anyway
[1:52pm] timbunce_: TDB is the right home for it

[1:53pm] [Sno]: timbunce_: less explicitly, that's the way DBD::DBM and DBD::CSV 
handle it currently, too - so yes, getting the database from TDB is wanted 
(regardless of how TDB gets it - by creating, config, env, guessing, brewing, ...)
[1:54pm] [Sno]: we usually have a "driver" for that and call some kind of 
accessor which in the background creates or opens the stuff
[1:55pm] [Sno]: and IIRC, andk bugged Tux about DBD::CSV blindly creating the 
database (directory) and wanted to have it configurable
[1:55pm] [Sno]: the conclusion to get the database (schema, …) from TDB makes 
everyone happy
[1:56pm] • timbunce_ nods

[2:01pm] BooK: how do you proceed from there?
[2:01pm] BooK: what do you need? the TDB::Driver or the TDB::Handle objects?
[2:01pm] BooK: sounds to me like the TDB::Driver thingy is enough
[2:04pm] timbunce_: BooK: I suspect we have enough for the moment and we'll get 
back to you with specific things (and maybe pull requests  ) later. Thanks!


Some thoughts...

* TDB's role is to provide a dbh connected to a 'workspace' where we
can run tests based on a user-provided config file.

That "workspace" might be a newly created database, or a newly created
schema, or an existing schema in an existing database. DBIT shouldn't
have to care.

The only other role of TDB would be to provide a way to delete the
database or schema if one was created by TDB in the first place.

Umm. Having written that out I find myself wondering if there's much
value in TDB creating databases or schemas for DBIT. Since DBIT has to
support the case of using an existing database we could simply mandate
that TDB creating databases or schemas is outside the scope of DBIT.
I.e., DBIT doesn't care how TDB provided the workspace and doesn't
try to trigger any TDB cleanup.

I think this means that DBIT would just require dsn= entries in the TDB
config file (not driver_dsn=). Essentially we'd just be using TDB to
read the config file and nothing else.
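
For illustration, a dsn=-based config might look something like this (the
values are made up; see the Test::Database docs for the authoritative format
of the config file):

  dsn      = dbi:mysql:database=dbit_test;host=localhost
  username = dbit
  password = s3kr3t

  dsn      = dbi:SQLite:dbname=/tmp/dbit_test.sqlite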


* DBIT fixture management, to be discussed in more detail later, can
assume that it can create tables in the workspace provided by TDB.

If table creation fails due to lack of privs then the test that requires
that fixture will be skipped. (Fixtures should be designed to avoid
needing tables where possible, e.g. a fixture to return a single row
could be implemented using "SELECT 'foo' as k, 42 as v". That way as
many tests as possible can still be run even where tables can't be
created.)
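
A tiny sketch of such a table-less fixture, assuming $dbh is the workspace
handle obtained via TDB (and noting that some engines, e.g. Oracle, would need
a FROM clause):

  use Test::More;

  my $row = $dbh->selectrow_hashref(q{SELECT 'foo' AS k, 42 AS v});
  is $row->{k}, 'foo', 'single-row fixture without creating a table';
  is $row->{v}, 42,    '... and its value column';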


* The timing of the creation, population, and deletion of test tables is
a key design issue we need to resolve.

We need to allow parallel running of tests, ie prove -j.

I think that rules out the use of test names eg t/00-setup.t & t/99-teardown.t
because other tests may run concurrently with them and thus fail.
Arguably prove could be extended to support setup/teardown but even
then t/00-setup.t can't know all the fixtures that all the tests might
need. So we need to create fixture tables as-needed/on-demand.

For cleanup an individual DBIT test script/process could drop any
created fixture tables at the end of the process. (That would work fine
but be a bit slow as a few common fixture tables get recreated and
repopulated over and over.  I've outlined an optimization below.)
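
A rough sketch of that create-on-demand-plus-drop-at-exit pattern (the helper
name and bookkeeping are hypothetical, not existing DBIT API):

  use strict;
  use warnings;

  my %created;   # table name => dbh that created it

  # create a fixture table only when a test actually asks for it
  sub need_fixture_table {
      my ($dbh, $table, $ddl) = @_;
      unless ($created{$table}) {
          $dbh->do("CREATE TABLE $table ($ddl)");
          $created{$table} = $dbh;
      }
      return $table;
  }

  # drop whatever this process created, even if a test died part-way through
  END {
      while (my ($table, $dbh) = each %created) {
          eval { $dbh->do("DROP TABLE $table") };
      }
  }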

Our table names need a prefix and structure that's very unlikely to be
used in an existing database. Since we might be using an existing
database we can't simply drop the db or schema to cleanup, so we need a
way to find and cleanup any old fixtures from aborted test runs.

So we need a useful naming convention. This is bound to become a bikeshed
issue but I'll start with this suggestion:

  dbit1_${yymmdd}_${dirhash}__$name  # /^dbit(\d)_(\d{6})_(\w+?)__(\w+)$/

The yymmdd field is the date the table was created and allows items from
old runs to be deleted.

The dirhash is a hash based on the directory the test suite is in, so
different people can run tests from separate build dirs in the same
workspace without clashing.  (Reusing the idea from TDB.)

The 1 in dbit1 and the double underscore before the $name are extra insurance.

Note that whatever naming convention is used should meet those needs
but also be fully abstracted behind a fixture API - so it could be
adapted to work with a database that doesn't support underscores
and/or weird cases like DBD::LDAP.
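
A minimal sketch of a helper generating names in that convention (using
Digest::MD5 over the cwd for the dirhash is just one possible choice):

  use POSIX       qw(strftime);
  use Cwd         qw(getcwd);
  use Digest::MD5 qw(md5_hex);

  sub dbit_table_name {
      my ($name) = @_;
      my $yymmdd  = strftime('%y%m%d', localtime);
      my $dirhash = substr(md5_hex(getcwd()), 0, 8);  # short, to respect 30-char limits
      return "dbit1_${yymmdd}_${dirhash}__${name}";   # matches the regex above
  }

  # old fixtures from aborted runs can then be found with the same regex
  # and dropped when their yymmdd field is older than, say, a day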


Tim.

p.s. Here's a possible optimization for test fixture creation:

Read-write fixtures need to use dynamically generated unique table names
so parallel test execution won't clash.  They'd need to be dropped at
the end of the test process.

However, read-only fixtures can have more 'fixed' table names that are
created and populated once.  Since they're not modified by the test we
can leave them in the database for later tests to re-use.

That then creates the problem of when and how to delete them.
Anyway, it may turn out that there's no need for this optimization.
For now we can just note the potential value in distinguishing
read-only vs read-write fixtures.
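
If we do pursue it, a rough sketch of how the distinction might be expressed
(reusing the hypothetical dbit_table_name() helper sketched above; the
structure is illustrative only):

  # read-only fixtures: stable name, created once, left in place for reuse
  # read-write fixtures: per-process name, always dropped at process end
  sub fixture_table_name {
      my ($base, $read_only) = @_;
      return $read_only
          ? dbit_table_name($base)
          : dbit_table_name($base . '_' . $$);
  }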
