The current analyzer relies on basic string parsing (regular expressions, string splitting, etc.), so it is not guaranteed to be 100% accurate: some characters, such as double quotes, can be interpreted as special characters by the analyzer.

For example, Python code whose route path contains an escaped double quote (") will not be parsed correctly, because the quote is interpreted as part of the path variable. Similarly, Go code with a comma (,) inside the path pattern will not be parsed correctly, because the comma is interpreted as an argument delimiter.
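The original Python snippet was lost from the page, but the failure mode can be sketched with a hypothetical route string and a naive quote-matching regex of the kind the analyzer uses:

```python
import re

# Hypothetical sketch: a source line whose path string contains an
# escaped double quote. A naive regex that captures everything between
# the first pair of quotes truncates the path at the inner quote.
source = r'@app.route("/users/\"admin\"/report")'
match = re.search(r'"([^"]*)"', source)
print(match.group(1))  # captures only '/users/\', not the full path
```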
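The Go snippet is likewise missing; the comma case can be sketched (again with hypothetical names — the `{1,5}` quantifier is an illustrative guess) by splitting a mux-style registration on commas, which breaks when the path pattern itself contains one:

```python
# Hypothetical sketch: a Go source line registering a handler whose
# path pattern contains a comma (inside the {1,5} quantifier).
# Splitting the argument list on commas cuts the path in two.
line = 'r.HandleFunc("/items/{id:[0-9]{1,5}}", ItemHandler)'
args = line[line.index("(") + 1 : line.rindex(")")].split(",")
print(args)  # the path argument is split at the quantifier's comma
```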
As these cases don't represent a universal scenario, I'm not sure whether to keep this as a known issue or to implement a shared lexer and parser that handles them more comprehensively.
You're right. Guaranteeing perfect accuracy is challenging given our tool's reliance on regular expressions and string matching for analysis. Building a lexer/parser would mean abstracting the code, accounting for each language's syntax, and identifying endpoints, which would require significant changes to our current structure.
While I agree it's the right long-term direction, taking the first step is proving to be tough 😨
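To make the direction concrete, here is a minimal sketch (hypothetical, not the project's actual design) of what even a tiny shared tokenizer buys over regex splitting: tracking quote and escape state character by character keeps embedded quotes and commas intact.

```python
# Hypothetical sketch of a shared tokenizer for argument lists:
# walking the string tracks quote and escape state, so a comma inside
# a quoted path no longer acts as a delimiter, and an escaped quote
# no longer terminates the string.
def split_args(arglist: str) -> list[str]:
    args, buf, in_str, escaped = [], [], False, False
    for ch in arglist:
        if escaped:
            buf.append(ch)          # char after a backslash is literal
            escaped = False
        elif ch == "\\" and in_str:
            buf.append(ch)
            escaped = True          # next char is escaped
        elif ch == '"':
            buf.append(ch)
            in_str = not in_str     # enter/leave string literal
        elif ch == "," and not in_str:
            args.append("".join(buf).strip())
            buf = []                # comma outside a string: delimiter
        else:
            buf.append(ch)
    if buf:
        args.append("".join(buf).strip())
    return args

print(split_args('"/items/{id:[0-9]{1,5}}", ItemHandler'))
# the comma inside the quoted pattern no longer splits the path
```

Both problem cases above fall out of the same state machine, which is the appeal of doing it once in a shared layer rather than per language.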