Glyph: Scaling Context Windows via Visual-Text Compression (github.com)
24 points by foruhar 4 days ago | 3 comments
  • phildougherty a day ago

    Can someone compare/contrast with deepseek-ocr?

  • ghoul2 9 hours ago

    I asked this question on another post and was downvoted; trying again. Don't we lose the "contextualization" that LLM embeddings provide? The embedding of token X carries information not just about X but about all the tokens that came before X in the context, which is why "flies" gets a different embedding in "time flies like an arrow" than in "fruit flies like a banana".

    The image embeddings, as I currently understand them, are just the pixel values of a block of pixels.

    What am I missing?
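
    The contextualization described above can be sketched with a toy causal self-attention pass. This is purely illustrative (the vocabulary, vectors, and dimensions are made up, and it is not Glyph's or DeepSeek-OCR's actual code); it only shows why the same token ends up with different embeddings in different contexts:

    ```python
    import numpy as np

    # Toy sketch: one causal self-attention layer mixes information from
    # earlier tokens into each token's vector, so the same word gets a
    # different contextual embedding in different sentences.
    rng = np.random.default_rng(0)
    vocab = {w: i for i, w in enumerate(
        ["time", "fruit", "flies", "like", "an", "arrow", "a", "banana"])}
    E = rng.normal(size=(len(vocab), 8))  # static, context-free embeddings

    def contextual_embeddings(words):
        """One causal self-attention pass over static embeddings."""
        X = np.stack([E[vocab[w]] for w in words])
        scores = X @ X.T / np.sqrt(X.shape[1])
        mask = np.tril(np.ones((len(words), len(words)), dtype=bool))
        scores = np.where(mask, scores, -np.inf)  # attend only to the past
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ X  # each row now mixes in the preceding tokens

    a = contextual_embeddings("time flies like an arrow".split())
    b = contextual_embeddings("fruit flies like a banana".split())

    # The static embedding of "flies" is identical in both sentences, but
    # its contextual embedding differs because a different preceding token
    # ("time" vs "fruit") is mixed in by attention.
    print(np.allclose(a[1], b[1]))  # False
    ```

    (A vision encoder typically runs attention over image patches too, so patch embeddings are generally not raw pixel values either, though the question of what cross-patch contextualization survives is a fair one.)
    
    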

  • kburman a day ago

    This looks very promising. Are there any downsides or potential gotchas?