Hi,

I do think the idea is good. I've commented on the pull request with
some technical issues. It would be an API compatibility break, but
coincidentally I'm planning such a break for a release at the end of
this year, so that time frame could work.

What situation were you running into that this helped with? What kind
of performance difference did you see in your testing?

On Sat, 2023-01-07 at 23:44 +0300, Владимир Лямин wrote:
> Could you let me know whether I can continue working in this direction?
> 
> On Sat, 7 Jan 2023 at 23:41, Владимир Лямин <st067...@student.spbu.ru> wrote:
> > Hello, I'm Lyamin Vladimir, a first-year master's student at St.
> > Petersburg State University. I decided to optimize the running time
> > of the pcmk__unpack_constraints function, since it loops over all of
> > the data, and I chose a hash table for this.
> > 
> > A hash table structure has been added, as well as functions to
> > manage it.
> > pe_resource_t *compareKey(const char *key, struct set *array);
> > int getHash(const char *S);
> > void push(Node **head, pe_resource_t *data);
> > void insert(char *key, pe_resource_t *data, struct set *array);
> > void init_array(struct set **array);
> > void insert_children(pe_resource_t *rsc, struct set *hashTable);
> > 
> > Existing functions have also been changed: pcmk__unpack_constraints
> > (which now initializes the hash table) and
> > pcmk__find_constraint_resource (which now looks up the desired
> > resource in it).
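
For readers following along, here is a minimal, self-contained sketch of
the technique described above: index resources once in a hash table keyed
by resource ID, then resolve each constraint's resource with an O(1)
average-case lookup instead of scanning the whole list. It uses GLib's
GHashTable and a stand-in resource struct; all names below are
illustrative assumptions, not the actual Pacemaker APIs or the custom
hash table from the patch.

    /*
     * Sketch only: replace a linear scan over all resources with a
     * one-time hash-table index keyed by resource ID.  The struct and
     * function names are hypothetical, not Pacemaker's.
     *
     * Build: gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
     */
    #include <stdio.h>
    #include <glib.h>

    /* Stand-in for a resource object (not pe_resource_t) */
    typedef struct {
        const char *id;
    } resource_t;

    /* Index every resource by its ID once, before unpacking constraints */
    static GHashTable *
    index_resources(resource_t *resources, guint n)
    {
        /* Keys are the resource ID strings; values point at the resources.
         * Nothing is copied, so the table must not outlive the array. */
        GHashTable *by_id = g_hash_table_new(g_str_hash, g_str_equal);

        for (guint i = 0; i < n; i++) {
            g_hash_table_insert(by_id, (gpointer) resources[i].id,
                                &resources[i]);
        }
        return by_id;
    }

    int
    main(void)
    {
        resource_t resources[] = { { "rsc-ip" }, { "rsc-db" }, { "rsc-web" } };
        GHashTable *by_id = index_resources(resources, 3);

        /* O(1) average-case lookup instead of walking the whole list for
         * every constraint that names a resource */
        resource_t *found = g_hash_table_lookup(by_id, "rsc-db");

        printf("found: %s\n", found ? found->id : "(none)");
        g_hash_table_destroy(by_id);
        return 0;
    }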
-- 
Ken Gaillot <kgail...@redhat.com>
