Hi everyone,

Vault is an identity-based secrets and encryption management system. Using Vault's CLI or HTTP API, secrets and other sensitive data can be securely stored and managed, tightly controlled (restricted), and audited.
Plugin name: `vault-auth`

How does the plugin perform authentication? The plugin's consumer schema has the following fields:

- *accesskey*: a key that identifies the owner of the credential
- *secretkey*: a key that validates the ownership of the accesskey

If they are not provided inside the consumer schema, they are generated as 16-byte, hex-encoded random strings.

FYI, Vault runs as a client-server architecture. As the plugin needs to communicate with the Vault server, we can store the server config (hostname, kv prefix (kv/apisix), Vault access token for HTTP calls) in the default config yaml file.

For the initial version, we are going to implement a Lua module with the following exported functions:

- get(key): key is a path suffix of the Vault kv engine. It queries the Vault server and returns the data. (e.g. `get("auth-key/key1")` retrieves data from the `kv/apisix/auth-key/key1` path, after joining with the kv prefix.)
- set(key, value): key is the path suffix as usual, and value is a Lua table. (`$ vault kv put kv/apisix/a foo=bar` is equivalent to `vault.set("a", {foo="bar"})`.)
- delete(key): delete operation.

Now, on the consumer side, we will:

- Create a consumer with the plugin `vault-auth` enabled (HTTP PUT to APISIX).
- After schema validation, check whether `vault-auth` is in the plugin conf. If yes, we store just the accesskey into etcd, with the relevant details, at the path /consumers/${consumer-username}.
- Store the ${secretkey} into Vault at the path `auth-key/${accesskey}`.
- Return the response body with resp.body.vault["data-stored"] = {secretkey: ${secretkey}} and HTTP 201.

After this, etcd contains only the access key (just that, not the secret key). For any GET request for the consumer ${consumer-username}, we query Vault (get) with the access key retrieved from etcd and return it as res.body.vault["data-fetched"] = ${fetched response}.

Now, while authenticating requests for an upstream with `vault-auth` enabled (inside the plugin's `rewrite` method):
1. We extract the access key and secret key from the request headers (or query params).
2. We fetch the secret key from Vault for the access key received in step 1.
3. We match it against the secret key from step 1.

Between steps 1 and 2, we can keep a local cache table that is checked to see whether the access key exists at all (this saves a pointless round trip to the Vault server when the access key is invalid).

What's wrong with storing the data in etcd (our default storage)? etcd isn't meant for key management. We can't assume keys are always static; there can be dynamic keys and ephemeral keys. Many companies use Vault as a de facto solution for key rolling, update, and management. The keys can be managed externally or programmatically. We, the APISIX developers, take the current truth and match it against the request data.

[Future roadmap]

- For the first version, the scope is pretty limited: we are just using the Vault kv engine. Vault also supports numerous other secret engines [1]; by using Vault as an adapter, the existing authentication plugins of Apache APISIX could be made to depend on Vault-stored data. This is a topic for future discussion in another mail thread.
- For authentication against Vault, in the first version, we are going with Vault tokens. Users can create a policy to restrict APISIX's access to the Vault data. Policies are written in HCL (HashiCorp Configuration Language); the one below gives access to anything inside the kv engine that starts with the kv/apisix prefix.

```
path "kv/apisix/*" {
  capabilities = ["read", "create", "update"]
}
```

We will support other auth methods [2] in later versions. First, we just need an MVP.

If you have any suggestions or questions, feel free to reply.

Thanks and Regards,
Bisakh <https://github.com/bisakhmondal>

[1]: https://www.vaultproject.io/docs/secrets
[2]: https://www.vaultproject.io/docs/auth
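P.S. To make the proposed flow concrete, here is a rough Python sketch of the key generation, the kv-prefix path joining, and the rewrite-phase check with the local cache. The real implementation will be a Lua module inside APISIX; all names here (`generate_key`, `vault_path`, `check_auth`, the in-memory `VaultStub`) are illustrative assumptions, and the Vault round trip is stubbed out rather than performed over HTTP:

```python
import secrets

KV_PREFIX = "kv/apisix"  # assumed default; configurable via the yaml config


def generate_key() -> str:
    """16 random bytes, hex-encoded (32 characters), as in the proposal."""
    return secrets.token_hex(16)


def vault_path(key: str) -> str:
    """Join a key suffix with the configured kv prefix."""
    return KV_PREFIX.rstrip("/") + "/" + key.lstrip("/")


class VaultStub:
    """In-memory stand-in for the Vault kv engine (get/set/delete)."""

    def __init__(self):
        self._kv = {}

    def set(self, key, value):
        self._kv[vault_path(key)] = value

    def get(self, key):
        return self._kv.get(vault_path(key))

    def delete(self, key):
        self._kv.pop(vault_path(key), None)


def check_auth(vault, known_access_keys, accesskey, secretkey):
    """Steps 1-3 of the rewrite-phase check.

    `known_access_keys` plays the role of the local cache table: if the
    access key is not in it, we reject without a Vault round trip.
    """
    if accesskey not in known_access_keys:       # cache check (between 1 and 2)
        return False
    stored = vault.get(f"auth-key/{accesskey}")  # step 2: fetch from Vault
    # step 3: compare the stored secret with the one from the request
    return stored is not None and stored.get("secretkey") == secretkey


# Consumer creation: only the access key goes to etcd (modeled by `cache`);
# the secret key goes to Vault under auth-key/${accesskey}.
vault = VaultStub()
accesskey, secretkey = generate_key(), generate_key()
cache = {accesskey}
vault.set(f"auth-key/{accesskey}", {"secretkey": secretkey})

print(check_auth(vault, cache, accesskey, secretkey))  # True
print(check_auth(vault, cache, accesskey, "wrong"))    # False
print(check_auth(vault, cache, "unknown", secretkey))  # False, no Vault call
```

An invalid access key is rejected from the cache alone, which is exactly the round-trip saving described above.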