>> I would see the binaries better named as:
>>
>>  slon    -> slond
>>  slonik* -> slonc (with subcommands..."c" being for "client")
>>
>> Anyway, the names themselves are not the most important thing;
>> avoiding confusingly similar names is.
>>
>> Are others interested in continuing to have a higher-level interface
>> that doesn't require using slonik directly for common operations?
>>
>
> I recently started to think about an object-oriented, easy-to-use
> altperl library which implements most of the functionality in a single
> Perl library that could be used in a clean, distributable manner. I've
> done a first prototype named SlonConfig.pm, which implements a class
> for configuring, administering and monitoring a slon cluster, but it's
> nothing more than a prototype for now. What I could imagine is an
> interface like
>
> $conf = SlonConfig->new(<params or config of cluster>)
>
> $conf->addNode(<node>);
> $conf->getStatus(<cluster>);
> $conf->subscribe(<params of set>);
> $conf->move(<params>);
>
> Each instance of such an object would describe a cluster configuration
> and would provide an easy interface to the underlying infrastructure
> (maybe the same way altperl currently does, by generating slonik
> scripts), for example
>
> slonc --subscribe 1 2
> slonc --createset 1
> slonc --failover node1 node2
>
> etc...
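A minimal sketch of what such a SlonConfig.pm wrapper might look like (everything here is hypothetical and based only on the interface proposed above; the class just generates slonik script text, which a real wrapper would then pipe into slonik):

```perl
#!/usr/bin/perl
# Hypothetical sketch of the proposed SlonConfig interface.
# All names (cluster, nodes, conninfo strings) are illustrative.
use strict;
use warnings;

package SlonConfig;

sub new {
    my ($class, %args) = @_;
    my $self = {
        cluster => $args{cluster} || 'mycluster',
        nodes   => {},   # node id => admin conninfo
    };
    return bless $self, $class;
}

sub addNode {
    my ($self, $id, $conninfo) = @_;
    $self->{nodes}{$id} = $conninfo;
}

# Build a slonik script that subscribes $receiver to set $set
# from $provider.  Returns the script text instead of running it.
sub subscribe {
    my ($self, $set, $provider, $receiver) = @_;
    my $script = "cluster name = $self->{cluster};\n";
    for my $id (sort keys %{ $self->{nodes} }) {
        $script .= "node $id admin conninfo = '$self->{nodes}{$id}';\n";
    }
    $script .= "subscribe set (id = $set, provider = $provider, "
             . "receiver = $receiver, forward = no);\n";
    return $script;
}

package main;

my $conf = SlonConfig->new(cluster => 'testcluster');
$conf->addNode(1, 'dbname=app host=db1');
$conf->addNode(2, 'dbname=app host=db2');
print $conf->subscribe(1, 1, 2);
```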
I'd like to offer my 0.02€ regarding this subject:

1) I think the whole wrap-slonik-with-perl approach is generally wrong 
-- what I would really like to see is slonik itself modified so that it 
becomes a more interactive client, like psql. Now I don't know anything 
about how slonik works, and I don't even know if my suggestion is 
feasible, but I think it would be the simplest and most effective way to 
control Slony. For example, you could run slonc -d <clustername> and 
then execute any commands necessary (I agree with Mark Stosberg on the 
naming issue: slonc is better than slonik). slonc could also have a -f 
switch for reading commands from files, etc.
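Purely as illustration, a session with such a hypothetical slonc might look like this (the client and its prompt are imaginary; the subscribe syntax is borrowed from slonik):

```
$ slonc -d mycluster
mycluster> subscribe set (id = 1, provider = 1, receiver = 2);
mycluster> \q
```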

2) In my opinion, Slony could have a better monitoring/maintenance 
interface. Many of the recent posts on the list have been dealing with 
this issue. For example, with the slonc approach explained above, you 
could have slonc --ping <clustername>, which would "ping" all nodes in a 
cluster. Also, it would be great if slonik could guarantee that if a 
command is executed at one node, it is executed at all nodes -- like a 
slonik transaction. I have had many cases where, for example, a DDL 
command executed with slonik's "execute script" has been successfully 
executed at the master but has failed at the slaves. The only workaround 
I have come up with so far is "delete from _cluster.sl_event 
where ev_type <> 'SYNC'".
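Such a --ping could be prototyped in a few lines of Perl with DBI (a sketch only: the node ids, conninfo strings and user are made up, and a real tool would read them from the cluster configuration rather than hard-coding them):

```perl
#!/usr/bin/perl
# Hypothetical sketch of "slonc --ping <clustername>": try to connect
# to every node's database and report which ones answer.  Requires
# DBD::Pg and reachable databases; all connection details are made up.
use strict;
use warnings;
use DBI;

my %nodes = (
    1 => 'dbi:Pg:dbname=app;host=db1',
    2 => 'dbi:Pg:dbname=app;host=db2',
);

for my $id (sort keys %nodes) {
    my $dbh = DBI->connect($nodes{$id}, 'slony', '',
                           { RaiseError => 0, PrintError => 0 });
    if ($dbh) {
        # SELECT 1 as a minimal liveness check; a real tool might also
        # look at the cluster schema for replication lag.
        my ($ok) = $dbh->selectrow_array('SELECT 1');
        print "node $id: " . ($ok ? "alive" : "no answer") . "\n";
        $dbh->disconnect;
    } else {
        print "node $id: unreachable\n";
    }
}
```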

Like I said, I don't know anything about the internals of slony, so I 
don't know if my suggestions are possible to implement, but I would like 
to hear your comments on the subject. And don't get me wrong, I am using 
slony at the moment and I think it is a great piece of software. Keep up 
the good work!

Regards

MP
_______________________________________________
Slony1-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/slony1-general
