
[Feature]: Choose between NFKC and NFC normalization for Unicode characters so copy-pasting works #1282

Open
sfllaw opened this issue Mar 24, 2024 · 5 comments

@sfllaw

sfllaw commented Mar 24, 2024

Describe the proposed feature

HocrTransform.normalize_text normalizes text using the NFKC [1] compatibility algorithm.

@classmethod
def normalize_text(cls, s: str) -> str:
    """Normalize the given text using the NFKC normalization form."""
    return unicodedata.normalize("NFKC", s)

As explained in #1272, it does this so that searching for Bauernstube will match Bauernſtube in naïve PDF readers.

Unfortunately, this means that copy-pasting text out of the OCRed PDF will produce the normalized form, which will not match the rasterized image that the user sees.

If there were an option to choose between the NFKC and NFC normalization forms, then the author could opt to render the text more faithfully. In my case, I was surprised that 1½ was normalized to 11/2, which is a very different number!
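
For concreteness, here is the difference in plain Python (standard library only, not OCRmyPDF code), using the two examples from this thread:

import unicodedata

# NFKC folds compatibility characters: ½ becomes the three characters 1⁄2
# (with U+2044 FRACTION SLASH), and the long s ſ becomes s.
print(unicodedata.normalize("NFKC", "1½"))          # 11⁄2
print(unicodedata.normalize("NFKC", "Bauernſtube"))  # Bauernstube

# NFC only composes canonically equivalent sequences, so both stay intact.
print(unicodedata.normalize("NFC", "1½"))           # 1½
print(unicodedata.normalize("NFC", "Bauernſtube"))   # Bauernſtube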

Footnotes

  1. Unicode® Standard Annex #15: Unicode Normalization Forms

@sfllaw
Author

sfllaw commented Mar 26, 2024

I am happy to submit a patch, if you accept contributions. I would suggest having a command-line option like --normalize-unicode=NFKC, which should be the default. Obviously, the documentation will have to describe why you would want to pick NFC over NFKC. I think it shouldn’t offer NFD normalization, but if someone has a valid use-case, it can be easily added on later.

I am also open to other command-line flags, if you think that users learning about Unicode normalization is too much to impose on them.
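
To make the proposal concrete, a minimal argparse sketch of what the option could look like (the option name and help text are illustrative only, not existing OCRmyPDF code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--normalize-unicode",
    choices=["NFKC", "NFC"],
    default="NFKC",
    help="Unicode normalization form applied to OCR text "
         "(NFKC folds compatibility characters such as ½ and ſ; "
         "NFC preserves them)",
)

# Example invocation
args = parser.parse_args(["--normalize-unicode", "NFC"])
print(args.normalize_unicode)  # NFC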

@jbarlow83
Collaborator

@sfllaw I appreciate the suggestion, but I think what will really be needed is to insert markup into the PDF that allows competent PDF readers to see what is going on, and then testing to see whether it helps sufficiently.

If you want to attempt this, the relevant portion of the spec is below:

[screenshot of the PDF specification's description of ActualText replacement text]

@sfllaw
Author

sfllaw commented Mar 27, 2024

@jbarlow83 If I understand correctly, you are suggesting that we use ActualText to override the invisible GlyphLessFont text that Tesseract produces, replacing it with the NFC-normalized text?

That is, for the above example with the scanned 1½, OCRmyPDF would produce something like:

/Span <</ActualText (1½)>>
BDC
(11/2) Tj
EMC

Maybe I am misunderstanding your proposal, because it seems like this will depend on how the PDF reader deals with ActualText? I have tried this using Evince, Okular, qpdfview, xpdf, and Chrome, and none of them match 1/2 when searching, because the invisible text has been overridden.

Because of this, I can’t think of an advantage over skipping NFKC altogether and rendering the NFC version in GlyphLessFont.

Does ActualText work as you expect in your PDF reader? Did you have a different example in mind?

Also, it looks like some PDF readers don’t handle non-trivial ActualText correctly, but I have not investigated this deeply: ho-tex/accsupp#2

@jbarlow83
Collaborator

A few key points here:

  1. The data (not text) inside the parentheses should be treated as binary data; if it happens to resemble text, that's just a convenient coincidence.
  2. The binary data inside parentheses in a PDF is interpreted in PdfDocEncoding, not UTF-8 or any other encoding. Use '½'.encode('pdfdoc') to perform the conversion to bytes.
  3. The binary data inside parentheses is a list of character IDs to render. The font defines which glyph IDs to render for a given character ID (accents can be rendered as multiple glyph IDs), and (hopefully) provides a mapping from character IDs to Unicode. Most fonts are sane and make character IDs equal to Unicode code points, but that is by no means required.

When using parentheses in a content stream, the character IDs must be encoded in pdfdoc. However, ½ is U+00BD, which is b'\xbd' in pdfdoc. If you encode a content stream in UTF-8, ½ would be encoded as b'\xc2\xbd', which is not equivalent. Does the hexdump show /ActualText(... 31 BD ...) or /ActualText(... 31 C2 BD ...)? If the latter, that would explain why the text was not recognized - it looks like '1Â½' in pdfdoc.
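
A quick byte-level illustration of that point (latin-1 is used here only because it happens to agree with PdfDocEncoding for this particular character; pikepdf's 'pdfdoc' codec mentioned above is the proper tool):

s = "1½"  # U+0031, U+00BD

# UTF-8 needs two bytes for ½ ...
print(s.encode("utf-8"))    # b'1\xc2\xbd'

# ... while PdfDocEncoding uses the single byte 0xBD.
print(s.encode("latin-1"))  # b'1\xbd'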

In reference to the final point, GlyphLessFont defines itself as having a 1:1 mapping of Unicode to character ID, and then maps all character IDs to glyph ID 1, which is a blank cell. ActualText is supposed to supply an alternate list of character IDs that are used for searching and copy-pasting, but not for rendering, such as in the example from the PDF reference manual, where hyphenation is present in the rendered text but eliminated in ActualText.
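
If it helps to check this concretely, a small pikepdf sketch for inspecting the OCR layer's font and its ToUnicode CMap (the file name, and the assumption that the font sits in the first page's resources, are mine):

import pikepdf

with pikepdf.open("ocr_output.pdf") as pdf:  # hypothetical file name
    page = pdf.pages[0]
    for name, font in page.Resources.Font.items():
        print(name, font.get("/BaseFont"))
        if "/ToUnicode" in font:
            # GlyphLessFont's ToUnicode CMap maps character IDs to Unicode
            # (effectively an identity mapping), which is what makes the
            # invisible text searchable at all.
            print(font.ToUnicode.read_bytes()[:300])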

All that said, it's quite possible most PDF viewers don't respect ActualText even when it's properly encoded.

@sfllaw
Author

sfllaw commented Mar 27, 2024

Thank you for all the details about the intricate Unicode handling in PDFs! However, I’d like to pop the stack and talk about the bigger picture.

When OCRmyPDF encounters the ocrx_word 1½ in an hOCR file, it normalizes that to 11/2, which is a much larger number! You suggested that ActualText markup would allow competent PDF readers to see what is going on, but I don't understand how that would work better than doing NFC normalization instead of NFKC. Since OCRmyPDF already typesets invisible text, why do we need to add ActualText on top of it?

I’d really like to solve this bug in a way that you’d be happy with. Could you please help me understand your proposal?
