jnlt3 opened a new pull request, #2129:
URL: https://github.com/apache/datafusion-sqlparser-rs/pull/2129
## Description of Issue
Upon encountering `?`, the tokenizer first consumes the character and then peeks
ahead to match any of the following: `|`, `&`, `-`, `#`. If none of these
symbols is present, it calls `consume_and_return(chars, Token::Question)`,
which consumes an additional character but returns only a `Token::Question`, so
the character after `?` is silently dropped. This is also reflected in
`tokenize_with_location`, where the resulting `Token::Question` has a span of
two characters instead of one.
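To make the failure mode concrete, here is a minimal standalone sketch of the
double-consume pattern (this is not the sqlparser source; all names here are
illustrative):
```rs
use std::iter::Peekable;
use std::str::Chars;

// Mirrors the helper's behavior: advance the iterator once, then return.
fn consume_and_return(chars: &mut Peekable<Chars<'_>>, tok: &'static str) -> Option<&'static str> {
    chars.next();
    Some(tok)
}

fn next_token(chars: &mut Peekable<Chars<'_>>) -> Option<&'static str> {
    match chars.next()? {
        '?' => match chars.peek() {
            Some('|') => consume_and_return(chars, "?|"),
            Some('&') => consume_and_return(chars, "?&"),
            // ('-' and '#' would be handled similarly.)
            // BUG: '?' was already consumed by the outer `chars.next()`,
            // so this call swallows the character that follows it.
            _ => consume_and_return(chars, "?"),
        },
        _ => Some("other"),
    }
}

fn main() {
    let mut chars = "x?a".chars().peekable();
    let mut tokens = Vec::new();
    while let Some(tok) = next_token(&mut chars) {
        tokens.push(tok);
    }
    println!("{tokens:?}"); // ["other", "?"] — the 'a' never becomes a token
}
```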
## Reproducing the Issue
Both tests will fail on the current main branch.
```rs
use sqlparser::{
    dialect::PostgreSqlDialect,
    tokenizer::{Token, TokenWithSpan, Tokenizer},
};

// Tokenizing "x?" and "x?a" should yield different token streams,
// but on main the trailing 'a' is swallowed by the '?' arm.
#[test]
pub fn example_1() {
    let query_lhs = "x?";
    let query_rhs = "x?a";
    let tokens_lhs = Tokenizer::new(&PostgreSqlDialect {}, query_lhs)
        .tokenize()
        .unwrap();
    let tokens_rhs = Tokenizer::new(&PostgreSqlDialect {}, query_rhs)
        .tokenize()
        .unwrap();
    assert_ne!(tokens_lhs, tokens_rhs);
}

// A single-character token should span exactly one column,
// but on main the Question token spans two.
#[test]
pub fn example_2() {
    let tokens = Tokenizer::new(&PostgreSqlDialect {}, "x?a")
        .tokenize_with_location()
        .unwrap();
    for token in tokens {
        if let TokenWithSpan {
            token: Token::Question,
            span,
        } = token
        {
            assert_eq!(span.start.column + 1, span.end.column)
        }
    }
}
```
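For reference, a third check that distinguishes the buggy and fixed behavior
(a hypothetical test, assuming `tokenize` does not append a trailing EOF
token): with the fix, `x?a` yields three tokens; on current main it yields
two, because the `a` is swallowed.
```rs
#[test]
pub fn example_3() {
    let tokens = Tokenizer::new(&PostgreSqlDialect {}, "x?a")
        .tokenize()
        .unwrap();
    // Expected after the fix: Word("x"), Question, Word("a").
    assert_eq!(tokens.len(), 3);
}
```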
## The Proposed Fix
The PR replaces the call to `self.consume_and_return(chars,
Token::Question)` with `Ok(Some(Token::Question))`, so the additional
character is no longer consumed.
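In context, the change amounts to the following (the surrounding `_` arm is
sketched from the description above, not copied from the source):
```rs
// Before: consume_and_return advances the iterator once more,
// swallowing the character that follows '?'.
_ => self.consume_and_return(chars, Token::Question),

// After: the '?' itself was already consumed, so just return the token.
_ => Ok(Some(Token::Question)),
```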
## Additional considerations
As far as I am aware, `Token::Question` is not a valid PostgreSQL token, and
the best course of action might be to explicitly not support it.