Hi Shally,

//snip//;
> >> +  /**< window size range in bytes */
> >> +  int support_dict;
> >> +  /** Indicate algo support for dictionary load */
> >[Fiona] I need more info on this - What else is needed on the API to support 
> >it?
> [Shally] We will need to add an API, say rte_comp_set_dictionary(), to load a
> pre-set dictionary.
> If a particular algorithm implementation supports dictionary load, then the
> app can call this API before starting compression.
[Fiona] OK, so maybe submit that in a patch separate from the hash one on github,
along with any needed APIs.
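
For discussion, a rough sketch of what such a prototype could look like - the
parameters and semantics below are only placeholders, to be firmed up in that
patch:

#include <stdint.h>
#include <stddef.h>

/*
 * Hypothetical prototype: load a pre-set dictionary on a stream before the
 * first compression op is enqueued. Only valid if the PMD reports dictionary
 * support for the chosen algo in its capabilities.
 */
int
rte_comp_set_dictionary(uint8_t dev_id, void *stream,
                        const uint8_t *dict, size_t dict_len);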
 
> 
> >
> >> +  struct rte_comp_hash_algo_capability *hash_capa;
> >> +  /** pointer to an array of hash capability struct */
> >[Fiona] Does the hash capability depend on the compression algorithm?
> >I'd have thought it's completely independent as it's computed
> >on the uncompressed data?
> >I suppose the same can be said of checksum.
> 
> [Shally] OK, I missed setting the background here.
> Though hash can in practice be performed standalone on data, we are
> proposing it as a feature that can be performed along with compression,
> and the same assumption holds for crc/adler. Thus we proposed it as part
> of the compression algo capability, e.g. an implementation may support
> Deflate+sha1 but not lzs+sha1.
> Do you want to propose hash/checksum as a standalone capability on the
> compression API? Shouldn't a standalone hash be part of the crypto spec?
> The same question applies to checksum, because the app can use a
> processor-optimized crc/adler library function if required. So I see
> standalone hash/checksums as a duplicated feature on compression.
> 
> I will issue the next patch on github once this requirement is resolved.
[Fiona] No, I wasn't suggesting a standalone op - just that I thought if a
device had the capability
to do hash it could probably do it regardless of which algo is used.
But if you think there's a case where a device has deflate+hash capability and
not lzs+sha1,
then the hash capability can be per-algo. Based on the above, can it just be
added to the
existing feature-flags bitmask? There's probably no need for the separate hash
and algo capability
structs you were proposing, since all capabilities are per-algo.
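
To illustrate what I mean - the flag names and bit positions below are purely
illustrative, not a concrete proposal, and deflate_capa is assumed to point at
whatever per-algo capability entry the app retrieved for deflate:

/* Illustrative additions to the per-algo comp_feature_flags bitmap */
#define RTE_COMP_FF_HASH_SHA1      (1ULL << 10)
#define RTE_COMP_FF_HASH_SHA2_256  (1ULL << 11)

/* App-side check: can this device do deflate + SHA-1 in a single op? */
if (deflate_capa->comp_feature_flags & RTE_COMP_FF_HASH_SHA1) {
        /* chain SHA-1 hash with deflate compression */
}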

> 
> >This all seems a bit awkward.
> >Would something like this be better:
> >
> >enum rte_comp_capability_type {
> >  RTE_COMP_CAPA_TYPE_ALGO,
> >  RTE_COMP_CAPA_TYPE_CHKSUM,
> >  RTE_COMP_CAPA_TYPE_HASH
> >};
> >
> >struct rte_comp_algo_capability {
> >     enum rte_comp_algorithm algo;
> >     /**< Compression Algorithm */
> >     uint64_t comp_feature_flags;
> >     /**< bitmap of flags for compression service features supported with
> > this algo */
> >     struct rte_comp_param_range window_size;
> >     /**< window size range in bytes supported with this algo */
> >     /* int support_dict; */
> >     /**< Indicate algo support for dictionary load */
> >};
> >
> >struct rte_compressdev_capabilities {
> >     enum rte_comp_capability_type capa_type;
> >     union {
> >         struct rte_comp_algo_capability algo_capa;
> >         int checksum_capa; /* bitmask of supported checksums */
> >         int hash_capa; /* bitmask of supported hashes - this could be a
> > struct if more data needed */
> >     };
> >};
> >
> >>
> >>  /** Structure used to capture a capability of a comp device */
> >>  struct rte_compressdev_capabilities {
> >> -  /* TODO */
> >>    enum rte_comp_algorithm algo;
> >> +  /** Compression Algorithm */
> >> +  struct rte_comp_algo_capability alg_capa;
> >> +  /**< Compression algo capabilities */
> >>    uint64_t comp_feature_flags;
> >> -  /**< bitmap of flags for compression service features*/
> >> -  struct rte_comp_param_range window_size;
> >> -  /**< window size range in bytes */
> >> +  /**< bitmap of flags for compression service features */
> >>  };
> >> +
//snip
